Archive for the ‘Mathematics’ Category

Free Will Destroys The Holographic Principle

February 12, 2017

Abstract: Many famous physicists promote (themselves and) the “Holographic Universe” (aka the “Holographic Principle”). I show that the Holographic Universe is incompatible with the notion of Free Will.


When studying Advanced Calculus, one discovers situations where the information on the boundary of a region enables one to reconstruct the information inside it. From my mathematical philosophy point of view, this phenomenon is a generalization of the Fundamental Theorem of Calculus, which says that the sum of the infinitesimals df over an interval is equal to the difference of the values of the function f on the boundary of that interval.
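That "sum of infinitesimals equals the boundary values" reading can be checked numerically: summing f′(x)·dx over many small steps reproduces f(b) − f(a). Here is a minimal sketch in Python; the function (cosine) and the step count are arbitrary illustrative choices:

```python
import math

def sum_of_infinitesimals(f_prime, a, b, n=100_000):
    """Riemann (midpoint) sum of f'(x) dx over [a, b] in n small steps."""
    dx = (b - a) / n
    return sum(f_prime(a + (i + 0.5) * dx) * dx for i in range(n))

a, b = 0.0, 2.0
total = sum_of_infinitesimals(math.cos, a, b)   # sum of the infinitesimals df = cos(x) dx
boundary = math.sin(b) - math.sin(a)            # value of f = sin on the boundary
print(abs(total - boundary) < 1e-6)             # True: the two agree
```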

The Fundamental Theorem of Calculus was discovered by the French lawyer and MP, Fermat, usually better known for proposing a theorem in Number Theory which took more than 350 years to be proven! Fermat actually invented calculus, a bigger fish he landed while Leibniz’s and Newton’s parents were in diapers.

As Wikipedia puts it, inserting a bit of francophobic fake news for good measure: “Fermat was the first person known to have evaluated the integral of general power functions. With his method, he was able to reduce this evaluation to the sum of geometric series.[10] The resulting formula was helpful to Newton, and then Leibniz, when they independently developed the fundamental theorem of calculus.” (Independently of each other, but not of Fermat; Fermat published his discovery in 1629. Newton and Leibniz were born in 1642 and 1646…)

Holography is a fascinating technology.  

Basic Setup To Make A Hologram. Once the Object, The Green Star, Has Fallen Inside A Black Hole, It’s Clearly Impossible To Make A Hologram of the Situation, If Free Will Reigns Inside the Green Star.


The objection is similar to that made in Relativity with light: if one went at the speed of light (supposing one could) and looked into a mirror, the light to be reflected could never catch up with the mirror. Hence, once at the speed of light, one could not see oneself in a mirror. Einstein claimed he got this idea when he was 16 years old. (Cute, but by then others had long figured out the part of Relativity pertaining to that situation…)

My further objection below is going to be a bit more subtle.


Here Is The Holographic Principle As Described In Wikipedia:

The holographic principle is a principle of string theories and a supposed property of quantum gravity that states that the description of a volume of space can be thought of as encoded on a lower-dimensional boundary to the region—preferably a light-like boundary like a gravitational horizon. First proposed by Gerard ‘t Hooft, it was given a precise string-theory interpretation by Leonard Susskind[1] who combined his ideas with previous ones of ‘t Hooft and Charles Thorn.[1][2] As pointed out by Raphael Bousso,[3] Thorn observed in 1978 that string theory admits a lower-dimensional description in which gravity emerges from it in what would now be called a holographic way.

In a larger sense, the theory suggests that the entire universe can be seen as two-dimensional information on the cosmological horizon, the event horizon from which information may still be gathered and not lost due to the natural limitations of spacetime supporting a black hole, an observer and a given setting of these specific elements,[clarification needed] such that the three dimensions we observe are an effective description only at macroscopic scales and at low energies. Cosmological holography has not been made mathematically precise, partly because the particle horizon has a non-zero area and grows with time.[4][5]

The holographic principle was inspired by black hole thermodynamics, which conjectures that the maximal entropy in any region scales with the radius squared, and not cubed as might be expected. In the case of a black hole, the insight was that the informational content of all the objects that have fallen into the hole might be entirely contained in surface fluctuations of the event horizon.
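The “radius squared, not cubed” scaling in that last paragraph can be made concrete with the Bekenstein–Hawking formula S = k·A·c³/(4Għ), where A is the horizon area. Since the Schwarzschild radius grows linearly with mass, the entropy grows as the mass squared. A quick check in Python, with SI constants rounded:

```python
import math

G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8       # speed of light, m/s
hbar = 1.055e-34  # reduced Planck constant, J s
k_B = 1.381e-23   # Boltzmann constant, J/K

def bh_entropy(mass_kg):
    """Bekenstein-Hawking entropy S = k A c^3 / (4 G hbar) of a Schwarzschild hole."""
    r_s = 2 * G * mass_kg / c ** 2   # Schwarzschild radius grows linearly with mass
    area = 4 * math.pi * r_s ** 2    # horizon area
    return k_B * area * c ** 3 / (4 * G * hbar)

m_sun = 1.989e30
ratio = bh_entropy(2 * m_sun) / bh_entropy(m_sun)
print(round(ratio))  # 4: doubling the mass quadruples the entropy (area scaling, not volume)
```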


The Superficiality Principle Rules:

I have long suspected that physicists and mathematicians are taken by the beauty of the simplification of knowing the inside from the outside. It is a sort of beauty-pageant, fashion-model way of looking at the world. It fails miserably with Black Holes.

To figure this out, one needs to know one thing about Black Holes, and another in philosophy of mind.



My reasoning is simple:

  1. Consider a Black Hole so large that a human being can fall into it without being shredded by tidal effects. A few lines of high school computation show that a Milky Way sized volume with the density of air on Earth is a Black Hole: light falling into it cannot come back. (Newton could have made the computation; Laplace actually did.)
  2. So here we have this Human (call her H), falling into the Milky Way Air Black Hole (MWAB).
  3. Once past the boundary of the Black Hole, Human H cannot be communicated with from outside the boundary (at least according to known physics).
  4. What the Holographic proponents claim is that they can know what is inside the MWAB.
  5. Suppose that Human H decides to have scrambled eggs for breakfast instead of pancakes. The partisans of the Holographic Universe claim that they had that information already. However, they stand outside of the MWAB, the giant Black Hole, and cannot communicate with its interior. Nevertheless, Susskind and company claim they knew it all along.
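The high school computation of step 1 is the Newtonian escape-velocity argument Laplace made: a uniform sphere of density ρ sits inside its own Schwarzschild radius once its radius exceeds c·√(3/(8πGρ)). For air density that critical radius is only on the order of tens of astronomical units, so a Milky-Way-sized air cloud is (vastly) a black hole. A sketch in Python, assuming sea-level air density of about 1.2 kg/m³ and a rough Milky Way radius:

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s
rho_air = 1.2   # kg/m^3, air at sea level (assumed)

# A sphere of density rho equals its own Schwarzschild radius when
# R = 2 G M / c^2 with M = (4/3) pi R^3 rho, i.e. R^2 = 3 c^2 / (8 pi G rho).
r_crit = c * math.sqrt(3 / (8 * math.pi * G * rho_air))
print(f"critical radius for air density: {r_crit:.2e} m")  # about 1e13 m, tens of AU

r_milky_way = 5e20  # rough Milky Way radius in meters
mass = (4 / 3) * math.pi * r_milky_way ** 3 * rho_air
r_schwarzschild = 2 * G * mass / c ** 2
print(r_schwarzschild > r_milky_way)  # True: light falling in cannot come back
```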

That is obviously grotesque. (Except if you believe Stanford physicists are omniscient, omnipotent gods, violating known laws of physics: that is basically what they claim.)

This is not as ridiculous as the multiverse (the most ridiculous theory ever). But it’s pretty ridiculous too. (Not to say that the questions Free Will leads to in physics are all ridiculous: they are not, especially regarding Quantum Theory!)

By the way, there are other objections against the Holographic Universe having to do with the COSMOLOGICAL Event Horizon (in contradistinction to those generated by Black Holes). Another time…


We Are Hypocrites, So We Live From Fake News:

Tellingly, the men promoting the Holographic Universe are Nobel Laureates, or the like. Such men tend to be very ambitious, full of Free Will, ready to say, or do, anything to dominate (I have met dozens in person). It is revealing that their Free Will is so great that they are ready to contradict what they are all about, just to make everybody talk about them, and to promote their already colossal glory.

Patrice Ayme’

The Quantum Puzzle

April 26, 2016


Is Quantum Computing Beyond Physics?

More exactly, do we know, can we know, enough physics for (full) quantum computing?

I have long suggested that the answer to this question was negative, and smirked at physicists sitting billions of universes on a pinhead, as if they had nothing better to do, the children they are. (Just as with their Christian predecessors in the Middle Ages, their motives are not pure.)

Now an article in the American Mathematical Society Journal of May 2016 repeats some of the arguments I had in mind: The Quantum Computer Puzzle. Here are some of the arguments. One often hears that Quantum Computers are a done deal. Here is the explanation from Justin Trudeau, Canada’s Prime Minister, which reflects perfectly the official scientific conventional wisdom on the subject:

(One wishes all our great leaders would be as knowledgeable… And I am not joking as I write this! Trudeau did engineering and ecological studies.)

... Supposing, Of Course, That One Can Isolate And Manipulate Qubits As One Does Normal Bits...


Before some object that physicists are better qualified than mathematicians to talk about the Quantum, let me point towards someone who is perhaps the most qualified experimentalist in the world on the foundations of Quantum Physics. Serge Haroche is a French physicist who got the Nobel Prize for figuring out how to count photons without seeing them. It is the most delicate Quantum Non-Demolition (QND) method I have heard of, and it involved making the world’s most perfect mirrors. The punch line? Serge Haroche does not believe Quantum Computers are feasible. However, Haroche has not said how he reached that conclusion. The article in the AMS does make plenty of suggestions to that effect.

Let me hasten to add that some form of Quantum Computing (or Quantum Simulation) called “annealing” is obviously feasible. D-Wave, a Canadian company, is selling such devices. In my view, Quantum Annealing is just the two-slit experiment writ large. Thus the counter-argument can be made that conventional computers can simulate annealing (and that has indeed been the argument against D-Wave’s machines).

Full Quantum Computing (also called “Quantum Supremacy”) would be something completely different. Gil Kalai, a famous mathematician and a specialist of Quantum Computing, is skeptical:

“Quantum computers are hypothetical devices, based on quantum physics, which would enable us to perform certain computations hundreds of orders of magnitude faster than digital computers. This feature is coined “quantum supremacy”, and one aspect or another of such quantum computational supremacy might be seen by experiments in the near future: by implementing quantum error-correction or by systems of noninteracting bosons or by exotic new phases of matter called anyons or by quantum annealing, or in various other ways…

A main reason for concern regarding the feasibility of quantum computers is that quantum systems are inherently noisy. We will describe an optimistic hypothesis regarding quantum noise that will allow quantum computing and a pessimistic hypothesis that won’t.”

Gil Kalai rolls out a couple of theorems which suggest that Quantum Computing is very sensitive to noise (the relevant measurements are similar to finding out which slit a photon went through). Moreover, he uses a philosophical argument against Quantum Computing:

It is often claimed that quantum computers can perform certain computations that even a classical computer of the size of the entire universe cannot perform! Indeed it is useful to examine not only things that were previously impossible and that are now made possible by a new technology but also the improvement in terms of orders of magnitude for tasks that could have been achieved by the old technology.

Quantum computers represent enormous, unprecedented order-of-magnitude improvement of controlled physical phenomena as well as of algorithms. Nuclear weapons represent an improvement of 6–7 orders of magnitude over conventional ordnance: the first atomic bomb was a million times stronger than the most powerful (single) conventional bomb at the time. The telegraph could deliver a transatlantic message in a few seconds compared to the previous three-month period. This represents an (immense) improvement of 4–5 orders of magnitude. Memory and speed of computers were improved by 10–12 orders of magnitude over several decades. Breakthrough algorithms at the time of their discovery also represented practical improvements of no more than a few orders of magnitude. Yet implementing Boson Sampling with a hundred bosons represents more than a hundred orders of magnitude improvement compared to digital computers.

In other words, it is unrealistic to expect such a, well, quantum jump…

“Boson Sampling” is a hypothetical, and the simplest, way proposed to implement a Quantum Computer. (It is known neither whether it could be built, nor whether it would be good enough for Quantum Computing; yet it is intensely studied nevertheless.)
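Kalai’s “more than a hundred orders of magnitude” figure can be sanity-checked: brute-force simulation of Boson Sampling requires evaluating permanents of n×n matrices, which naively costs on the order of n!·n operations (Ryser’s formula cuts this to about n·2ⁿ, still astronomical). A back-of-the-envelope estimate in Python for n = 100 bosons:

```python
import math

n = 100
# log10 of the operation counts, computed via logarithms to avoid huge integers
log10_naive = (math.lgamma(n + 1) + math.log(n)) / math.log(10)  # log10(n! * n)
log10_ryser = (math.log(n) + n * math.log(2)) / math.log(10)     # log10(n * 2^n)
print(round(log10_naive))  # 160: roughly 160 orders of magnitude, naively
print(round(log10_ryser))  # 32: still astronomical even with Ryser's algorithm
```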


Quantum Physics Is The Non-Local Engine Of Space, and Time Itself:

Here is Gil Kalai again:

“Locality, Space and Time

The decision between the optimistic and pessimistic hypotheses is, to a large extent, a question about modeling locality in quantum physics. Modeling natural quantum evolutions by quantum computers represents the important physical principle of “locality”: quantum interactions are limited to a few particles. The quantum circuit model enforces local rules on quantum evolutions and still allows the creation of very nonlocal quantum states.

This remains true for noisy quantum circuits under the optimistic hypothesis. The pessimistic hypothesis suggests that quantum supremacy is an artifact of incorrect modeling of locality. We expect modeling based on the pessimistic hypothesis, which relates the laws of the “noise” to the laws of the “signal”, to imply a strong form of locality for both. We can even propose that spacetime itself emerges from the absence of quantum fault tolerance. It is a familiar idea that since (noiseless) quantum systems are time reversible, time emerges from quantum noise (decoherence). However, also in the presence of noise, with quantum fault tolerance, every quantum evolution that can experimentally be created can be time-reversed, and, in fact, we can time-permute the sequence of unitary operators describing the evolution in an arbitrary way. It is therefore both quantum noise and the absence of quantum fault tolerance that enable an arrow of time.”
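Kalai’s point that local circuit rules still create very nonlocal states is exactly what the simplest two-qubit circuit shows: a Hadamard gate acting on one qubit, followed by a CNOT coupling two neighbors, produces the maximally entangled Bell state. A minimal pure-Python sketch of that circuit (state vectors as length-4 lists, basis order |00⟩, |01⟩, |10⟩, |11⟩):

```python
import math

def apply(gate, state):
    """Multiply a 4x4 gate matrix by a length-4 state vector."""
    return [sum(gate[i][j] * state[j] for j in range(4)) for i in range(4)]

h = 1 / math.sqrt(2)
# Hadamard on the first qubit only: a strictly local operation
H1 = [[h, 0, h, 0],
      [0, h, 0, h],
      [h, 0, -h, 0],
      [0, h, 0, -h]]
# CNOT: first qubit controls the second, a two-local interaction
CNOT = [[1, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 1],
        [0, 0, 1, 0]]

state = [1, 0, 0, 0]                   # start in |00>
state = apply(CNOT, apply(H1, state))  # local rules...
print(state)                           # ...nonlocal result: (|00> + |11>)/sqrt(2)
```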

Just for future reference, let us note that, as Kalai writes: “with quantum computers one can emulate a quantum evolution on an arbitrary geometry. For example, a complicated quantum evolution representing the dynamics of a four-dimensional lattice model could be emulated on a one-dimensional chain of qubits.

This would be vastly different from today’s experimental quantum physics, and it is also in tension with insights from physics, where witnessing different geometries supporting the same physics is rare and important. Since a universal quantum computer allows the breaking of the connection between physics and geometry, it is noise and the absence of quantum fault tolerance that distinguish physical processes based on different geometries and enable geometry to emerge from the physics.”


I have proposed a theory which explains the preceding features, including the emergence of space. Let’s call it Sub Quantum Physics (SQP). The theory breaks a lot of sacred cows. Besides, it brings an obvious explanation for Dark Matter. If I am correct, the Dark Matter Puzzle is directly tied in with the Quantum Puzzle.

In any case, it is a delight to see in print part of what I have been severely criticized for saying for all too many decades… The gist of it all is that present-day physics may well be fundamentally incomplete.

Patrice Ayme’


December 29, 2015

Thesis: Quantum Waves themselves are (partly) what information is made of. Consciousness, being Quantum, shows up as information. Reciprocally, information gets translated by the Quantum, and then builds the brain, then the mind, thus consciousness. So the brain is a machine debating with the Quantum. Let me explain a bit, while expounding along the way the theory of the General Relativity of Ontological Effectiveness, “GROE”:


What is the relationship between the brain and consciousness? Some will point out we have to define our terms: what is the brain, what is consciousness? We can roll out an effective definition of the brain (it’s where most neurons are). But consciousness eludes definition.

Still, that does not mean we cannot say more. And, from saying more, we will define more.

Relationships between definitions, axioms, logic and knowledge are a matter of theory:

Take Euclid: he starts with points. What is a point? Euclid does not say; he does not know; he has to start somewhere. However, exactly where that somewhere is may itself be full of untoward consequences (in the 1960s, mathematicians working in Algebraic Geometry found that points caused problems; they have caused problems in Set Theory too; vast efforts were directed at, and around, points). Effectiveness defines. Consider this:

Effective Ontology: I Compute, Therefore That's What I Am


Schematic of a nanoparticle network (about 200 nanometres in diameter). By applying electrical signals at the electrodes (yellow), and using artificial evolution, this disordered network can be configured into useful electronic circuits.


All right, more on my General Relativity of Ontological Effectiveness:

Modern physics talks of the electron. What is it? Well, strictly speaking, we don’t know. But, fuzzy thinking notwithstanding, we do have a theory of the electron, and it is so precise that it can be put in equations. So it is the theory of the electron which defines the electron. As the former could, and did, vary, so did the latter. (At some point the physicist Wheeler and his student Feynman suggested that the entire universe was peopled by just one electron going back and forth in time.)

Hence the important notion: concepts are defined by EFFECTIVE THEORIES OF THEIR INTERACTION with other concepts (General Relativity of Ontological Effectiveness: GROE).


NATURALLY Occurring Patterns Of Matter Can Recognize Patterns, Make Logic:

Random assemblies of gold nanoparticles can perform sophisticated calculations. Thus Nature can start computing, all by itself. There is no need for the carefully arranged patterns of silicon.

Classical computers rely on ordered circuits where electric charges follow preprogrammed rules, but this strategy limits how efficient they can be. Plans have to be made in advance, yet the number of possibilities grows so fast that the human brain cannot envision them all. The alternative is to do as evolution itself does when it creates intelligence: select the fittest. In this case, the fittest electronic circuits.
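A hedged toy sketch of such selection of the fittest circuits, in Python: candidate configurations are bit strings (standing in for electrode settings), fitness is how well a configuration reproduces a target truth table, and random mutation plus survival of the fittest does the design work. The target, population size, and fitness function here are illustrative assumptions, not the Twente group’s actual procedure:

```python
import random

random.seed(0)
TARGET = [0, 1, 1, 0]  # toy target behavior: the XOR truth table over two inputs

def behavior(config):
    """Toy 'circuit': the config bits ARE its responses to the 4 input pairs."""
    return list(config)

def fitness(config):
    """Number of input pairs on which the circuit matches the target."""
    return sum(1 for got, want in zip(behavior(config), TARGET) if got == want)

def mutate(config):
    """Flip one random bit of the configuration."""
    i = random.randrange(len(config))
    return config[:i] + (1 - config[i],) + config[i + 1:]

# Evolve a small population: keep the fittest half, refill with mutants.
population = [tuple(random.randint(0, 1) for _ in range(4)) for _ in range(8)]
for _ in range(200):
    population.sort(key=fitness, reverse=True)
    survivors = population[:4]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(4)]

best = max(population, key=fitness)
print(best, fitness(best))  # best evolved configuration and its fitness
```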

(Selection of the fittest was well-known to the Ancient Greeks, 25 centuries ago, 10 centuries before the Christian superstition. The Ancient Greeks used artificial and natural selection explicitly to create new breeds of domestic animals. However, Anglo-Saxons prefer to name things after themselves, so they can feel they exist; thus selection of the fittest is known to Anglo-Saxons as “Darwinian”. Hence soon we will hear about “Darwinian electronics”, for sure!)

“The best microprocessors you can buy in a store now can do 10^11 (one hundred billion) operations per second and use a few hundred watts,” says Wilfred van der Wiel of the University of Twente in the Netherlands, a leader of the gold circuitry effort. “The human brain can do orders of magnitude more and uses only 10 to 20 watts. That’s a huge gap in efficiency.”

To close the gap, one goes back to basics. The first electronic computers, in the 1940s, tried to mimic what were thought at the time to be brain operations. So the European Union and the USA are trying more of the same, to develop “brain-like” computers that do computations naturally, without their innards having been specifically laid out for the purpose. In recent years, a candidate material that can reliably perform real calculations has been found: gold.

Van der Wiel and colleagues have observed that clumps of gold grains handle bits of information (=electric charge) in the same way that existing microprocessors do.

Clumps of grains compute as a unit, in parallel, much as neurons seem to do in the brain. This should improve pattern recognition. A pattern, after all, is characterized by dimension higher than one, and so is a clump operating as one. A mask to recognize a mask.

Patterns are everywhere; logic itself is made of patterns.



So what am I saying, philosophically? I am proposing a (new) foundation for ontology which makes explicit what scientists and prehistoric men have been doing all along. 

The theory of the nature of being is ontology, the “Logic of Being”. Many philosophers, or pseudo-philosophers, have wrapped themselves up in knots about what “Being” is. (For example, Heidegger, trained as a Catholic seminarian, who later blossomed as a fanatical professional Nazi, wrote a famous book called “Sein und Zeit”, Being and Time. Heidegger tries at some point to obscurely mumble feelings not far removed from some explicit notions in the present essay.)

Things are defined by what they do. And they do what they do in relation with other things.

Where does it stop? Well, it does not. What we have done is define being by effectiveness. This is what mathematicians have been doing all along. Defining things by how they work produces things, and theories, which work. The obvious example is mathematics: it may be a castle in the sky, but this castle is bristling with guns, and its cannon balls are exquisitely precise, thanks to the science of ballistics, a mathematical creation.

Things are what they do. Fundamental things do few things, sophisticated things do many things, and thus have many ways of being.

Some will say: ‘all right, you have presented an offering to the gods of wisdom, so now can we get back to the practical, such as the problems Europe faces?’

Be reassured, creatures of little faith: Effective Ontology is very practical. First of all, it is what all of physics and mathematics, and actually all of science, rests on (and it defines them better than Karl Popper’s feeble attempt did).

Moreover, watch Europe. Some, including learned, yet nearly hysterical commenters who have graced this site, are desperately yelling to be spared from a “Federal Europe“, the dreaded “European Superstate“. The theory of Effective Ontology focuses on the essence of Europe. According to Effective Ontology, Europe is what it does.

And what does Europe do? Treaties. A treaty, in Latin, is “foedus”. Its genitive is foederis, and it gives foederatus, hence the French fédéral and, from there, 150 years later in the USA, “federal”. Europe makes treaties (with the Swiss (Con)federation alone, the European Union has more than 600 treaties). Thus Europe IS a Federal State.

Effective Ontology has been the driver of Relativity, Quantum Physics, and Quantum Field Theory. And this is precisely why those theories have made so many uncomfortable.

Patrice Ayme’


April 25, 2015

Abstract: A new view is seen (“theo-ry”) for the relationship of mind and universe, and mathematics is central. The Mathematical Mind Hypothesis (MMH). The MMH contradicts, explains, and thus overrules Platonism (the ruling explanation for math, among mathematicians). The MMH is the true essence of what makes the Mathematical Universe Hypothesis alluring.


What’s the nature of mathematics? I wrote two essays already, but was told I was just showing off as a mathematician, and the subject was boring. So let me try another angle today.

The nature of mathematics is a particular case of the nature of thinking.

For a number of reasons, deep in today’s physics, as I have (partly) explained in “Einstein’s Error”, many physicists are obsessed with the “Multiverse”, an extreme version of which is the “Mathematical Universe Hypothesis” (MUH), expounded for example by Tegmark, a tenured cosmologist at MIT. Instead of telling people what happened in the first second of the universe, as if I considered myself to be god, I prefer to consider dog:

Dogs LEARN To Choose “y” According To Least Time


[Dogs can also learn to solve that Calculus of Variation problem in much more difficult circumstances, if the water is choppy, the ground too soft, etc. To have such a mathematical brain allowed the species to catch dinner, and survive.]
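The dog’s problem is the least-time (Snell’s law style) problem: run along the water’s edge at speed r, then swim at the slower speed s to the ball. Ordinary calculus applied to the travel time gives the optimal swim entry point y* = w/√((r/s)² − 1), where w is the ball’s distance offshore. A sketch in Python comparing the closed form to brute-force minimization; the speeds and distances are illustrative assumptions, roughly those reported for retrievers:

```python
import math

r, s = 6.4, 0.9    # run and swim speeds, m/s (illustrative)
x, w = 20.0, 10.0  # ball: x m down the beach, w m offshore

def time_to_ball(y):
    """Run (x - y) m along the shore, then swim the diagonal to the ball."""
    return (x - y) / r + math.hypot(y, w) / s

# Closed-form optimum from dT/dy = 0:
y_star = w / math.sqrt((r / s) ** 2 - 1)

# Brute-force check on a fine grid of entry points:
y_grid = min((i * x / 100_000 for i in range(100_001)), key=time_to_ball)
print(abs(y_star - y_grid) < 0.01)  # True: the calculus answer minimizes the time
```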

The “Multiverse” has its enemies, and I am among them. Smolin, a physicist who writes general-access books, has tried to say something (as described in Massimo’s Scientia Salon’s “Smolin and the Nature of Mathematics”).

“Smolin,” Massimo, a tenured philosophy professor also a biology PhD, told me “as a counter [to Platonism], offers his model of development of mathematics, which does begin to provide an account for why mathematical theorems are objective (the word he prefers to “true,” in my mind appropriately so).”

My reply:

Smolin is apparently unaware of a whole theory of “truth” in mathematical logic, and of the existence of the work of famous logicians such as Tarski. When Smolin was in the physics department of Berkeley, so was the very famous Tarski, in the mathematics department. Obviously, the young and unknown Smolin never met the elder logician and mathematician, as he is apparently still in no way aware of any of his work.

Thus, what does Smolin say? Nothing recent. Smolin says mathematics is axiomatic, and develops like games. That was at the heart of the efforts of Frege’s mathematical logic, more than 115 years ago. (Bertrand Russell shot Frege’s theory down by applying the 24-centuries-old Cretan Paradox to it; interestingly, Buridan had found a rather modern solution to the problem in the 14th century!) To help sort things out, it was discovered that one could depict Axiomatic Systems with sequences of numbers. Could Axiomatics then not be made rigorously descriptive, strictly predictive?

Gödel showed that this approach could not work in any system containing arithmetic. Other logicians had proven even more general results in the same vein earlier than that (Löwenheim, Skolem and contemporaries). Smolin is now trying to reintroduce it, as if Löwenheim, Skolem, Gödel, and the most spectacular advances in logic of the first half of the Twentieth Century, never happened.
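The “sequences of numbers” device alluded to above is Gödel numbering: encode each symbol of a formula as an integer, then pack the sequence into a single number as a product of prime powers, so that statements about formulas become statements of arithmetic. A minimal sketch in Python; the symbol codes are arbitrary illustrative choices:

```python
def primes(n):
    """First n primes by trial division (fine for short formulas)."""
    found = []
    k = 2
    while len(found) < n:
        if all(k % p for p in found):
            found.append(k)
        k += 1
    return found

def godel_number(codes):
    """Encode a symbol sequence as 2^c1 * 3^c2 * 5^c3 * ..."""
    g = 1
    for p, c in zip(primes(len(codes)), codes):
        g *= p ** c
    return g

def decode(g):
    """Recover the symbol codes by factoring out each prime in turn."""
    codes = []
    k = 2
    while g > 1:
        if g % k == 0:
            e = 0
            while g % k == 0:
                g //= k
                e += 1
            codes.append(e)
        k += 1
    return codes

# Toy coding: 1 = '0', 2 = '=', 3 = 'S' (successor). "0 = S0" -> [1, 2, 3, 1]
n = godel_number([1, 2, 3, 1])
print(n, decode(n))  # 2^1 * 3^2 * 5^3 * 7^1 = 15750, and back again
```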

Does Mr. Smolin know this? Not necessarily: he is a physicist rather than a mathematician (like Tarski, or yours truly).

Smolin: “Both the records and the mathematical objects are human constructions which are brought into existence by exercises of human will.”

Smolin: Math brought into existence by HUMAN WILL. Mathematics as will and representation? (To parody Schopenhauer.)

So how come the minds of animals follow mathematical laws? Dogs, in particular, behave according to very complicated applications of calculus.

How come ellipses exist? Have ellipses been brought into existence by Smolin’s “human will”? When a planet follows (more or less) an ellipse, is that a “construction which has been brought into existence by exercises of human will”?

Some will perhaps say that the planet “constructs” nothing. That I misunderstood the planet.

Massimo’s quoted me, and asserted that there was no value whatsoever to the existence of mathematical objects:

I had said: “How come enormously complex and subtle mathematical objects, which are very far from arbitrary, exist out there?”

Massimo replied: “They don’t.”

And that’s it. It reminded me of the way God talks in the Qur’an. It is what it is, says Allah, and his apparent emulator, Massimo. Massimo did not explain why he feels that the spiral of a nautilus does not exist (or maybe he does not feel that way, because it clearly looks like a spiral). According to Smolin, the spiral is just a “construct of human will”.

If the spiral is a construct of human will, why not the mountains, and the ocean?

I am actually an old enemy of mathematical Platonism. However, I don’t throw the baby out with the bathwater.

I agree that the “Mathematical Universe Hypothesis”, and Platonism in general are erroneous. However that does not mean they are deprived of any value whatsoever.

Ideas never stand alone. They are always part of theories. An idea such as Platonism is actually a vast theory.

MUH is: ‘Our external physical reality is a mathematical structure.’

I do not believe in the MUH, because of my general sub-quantic theory, which predicts Dark Matter. In my theory, vast quantum interactions leave debris: Dark Matter. That process is essentially chaotic, and indescribable except statistically (as the Quantum is). I propose a completely different route: our minds are constructed by (still hidden) laws which rule the universe. Call that the MATHEMATICAL MIND HYPOTHESIS (MMH).

Here is the MMH: our internal neurological reality constructs real physical structures we call “mathematics”.

This explains why a dog’s brain can construct the neurological structures it needs to find the solutions of complex problems in the calculus of variations.

Dogs did not learn calculus culturally, by reading books. Indeed. Still, they learned, by interacting with the universe. (It is unconscious learning, but still learning. Most learning we do arises unconsciously.)

From these interactions, dogs’ brains learn to construct structures which solve very complicated calculus of variations problems. As explained by the Mathematical Mind Hypothesis, (hidden) physics shows up in neurological constructions we call mathematics. And those structures, constructed with this yet-unrevealed, not even imagined, physics, are not just mathematical: they are what we call mathematics itself. That is why dogs know mathematics: their brains contain mathematics.

Patrice Ayme’

Technical Note: Some may smirk, and object that my little theory ignores the variation in neurological structure from one creature to the next. Should not those variations mean that one beast’s math is not another beast’s math?

Not so.

Why? We need to go back to Cantor’s fundamental intuition about cardinals, and generalize (from Set Theory to General Topology). Cantor said that two sets had the same cardinal if they were in bijection. (Then he considered order, and introduced “ordinals”, by making the bijection respect order.)

I propose to say that two neurological structures are mathematically the same if they produce the same mathematics. (Some will say that’s obvious, but it is no more obvious than, say, “Skolemization”.)
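Cantor’s criterion, and the proposed generalization, can be phrased computationally: two finite collections have the same cardinal when a bijection exists between them, and two “brains” are mathematically the same when they produce the same results. A toy sketch in Python; the two differently wired “brains” here are arbitrary little functions, an assumption for illustration:

```python
def in_bijection(a, b):
    """Cantor: same cardinal iff a one-to-one onto map exists.
    For finite sets, such a bijection exists iff the sizes match."""
    return len(set(a)) == len(set(b))

def same_math(brain1, brain2, inputs):
    """Proposed criterion: identical produced 'mathematics' on every input."""
    return all(brain1(x) == brain2(x) for x in inputs)

# Two differently 'wired' procedures that produce the same squares:
brain_a = lambda n: n * n
brain_b = lambda n: sum(2 * k + 1 for k in range(n))  # sum of the first n odd numbers

print(in_bijection({1, 2, 3}, {"x", "y", "z"}))  # True: a bijection exists
print(same_math(brain_a, brain_b, range(10)))    # True: same math, different wiring
```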

[Last point: I use “neurology” to designate much more than the set of all neurons, dendrites, synapses, axons and attached oligodendrocytes. I designate thus the entire part of the brain which contributes to mind and intelligence (so includes all glial cells, etc.). That ensemble is immensely complex, in dimensions and topologies.]


April 22, 2015


After demolishing erroneous ideas some 25 centuries old, some brand new, I explain why Mathematics Can Be Made To Correspond To A Subset Of Neurology. And Why Probably Neurology Is A Consequence Of Not-Yet Imagined Physics.

Distribution of Prime Numbers Reworked Through Fourier Analysis: It Nearly Looks Like Brain Tissue



Einstein famously declared that: “How can it be that mathematics, being after all a product of human thought which is independent of experience, is so admirably appropriate to the objects of reality?”

Well, either it is an unfathomable miracle, or something in the premises has to give. Einstein was not at all original here, he was behaving rather like a very old parrot.

That the brain is independent of experience is a very old idea. It is Socrates-style “knowledge”, a “knowledge” given a priori. From there, naturally enough, arose what one should call the “Platonist Delusion”: the belief that mathematics can only be independent of experience.

Einstein had no proof whatsoever that “thought is independent of experience”. All that a brain does is experience and deduce. It starts in the womb. It happens even in an isolated brain. Even a mini brain growing in a vat experiences (some) aspects of the world (gravity, vibrations). Even a network of three neurons experiences a sort of inner world unpredictable to an observer:
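Even the three-neuron remark can be given a toy illustration: a tiny recurrent network, iterated, develops internal trajectories that diverge from nearly identical starting states, so its “inner world” is effectively unpredictable to an outside observer who knows its state only approximately. A caricature in Python, using an expanding map with reset as a deliberately crude stand-in for neuronal nonlinearity (the weights are arbitrary assumptions, not real neural data):

```python
def step(state, coupling=0.1):
    """One tick of a toy 3-'neuron' net: self-excitation plus ring coupling.
    The mod 1 stands in for neuronal reset; the map expands small differences."""
    a, b, c = state
    return [(2 * a + coupling * b) % 1.0,
            (2 * b + coupling * c) % 1.0,
            (2 * c + coupling * a) % 1.0]

x = [0.1, 0.2, 0.3]
y = [0.1 + 1e-9, 0.2, 0.3]  # the same inner state, perturbed by one part in a billion
max_sep = 0.0
for _ in range(60):
    x, y = step(x), step(y)
    max_sep = max(max_sep, max(abs(u - v) for u, v in zip(x, y)))
print(max_sep > 0.01)  # True: a microscopic difference becomes macroscopic
```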

Latest Silliness: Smolin’s Triumph of the Will:

The physicist Lee Smolin has ideas about the nature of mathematics:


“the main effectiveness of mathematics in physics consists of these kinds of correspondences between records of past observations or, more precisely, patterns inherent in such records, and properties of mathematical objects that are constructed as representations of models of the evolution of such systems … Both the records and the mathematical objects are human constructions which are brought into existence by exercises of human will; neither has any transcendental existence. Both are static, not in the sense of existing outside of time, but in the weak sense that, once they come to exist, they don’t change”

Patrice Ayme: Smolin implies that “records and mathematical objects are human constructions which are brought into existence by exercises of HUMAN WILL; neither has any transcendental existence”. That’s trivially true: anything human has to do with human will.

However, the real question of “Platonism” is: why are mathematical theorems true?

Or am I underestimating Smolin, and Smolin is saying that right and wrong in mathematics is just a matter of WILL? (That’s reminiscent of Nietzsche, and Hitler’s subsequent obsession with the “will”.)

As I have known Smolin, let me not laugh out loud. (“Triumph of the Will” was a famous Nazi flick.)

I have a completely different perspective. “Human will” cannot possibly determine mathematical right and wrong, as many students who are poor at mathematics find out, to their dismay!

So what determines right and wrong in mathematics? How come enormously complex and subtle mathematical objects, which are very far from arbitrary, exist out there?

I sketched an answer in “Why Mathematics Is Natural”. It does not have to do with transcendence of the will.



Neurology, the logic of neurons, contains what one ought to call axonal logic, a sub-category.

Axonal logic is made of the simplest causal units: neuron (or another subset of the brain) A acts on neuron (or brain subset) B, through an axon. This axonal sub-category maps, through a functor, from neurology to mathematical logic: to A and B are associated propositions a and b in mathematical logic, and to the axon corresponds a logical implication.

Thus one sees that mathematics corresponds to a part of neurology (it’s a subcategory).
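A toy rendering of this correspondence (the neuron-to-proposition, axon-to-implication dictionary is the text's claim; the graph, names, and code are merely my illustration): represent the axonal graph as directed edges, translate each edge into an implication, and observe that composition of axons becomes chaining of implications (modus ponens).

```python
# Toy illustration (names invented): an "axonal" graph of brain
# subsets, and its functor-like translation into logic, where each
# axon A -> B becomes the implication a -> b.
axons = [("A", "B"), ("B", "C"), ("C", "D")]

# Translation: neuron X  |->  proposition x ; axon |-> implication.
implications = [(src.lower(), dst.lower()) for src, dst in axons]

def entails(premise, goal, rules):
    """Chain implications (modus ponens): does `premise` force `goal`?"""
    known = {premise}
    changed = True
    while changed:
        changed = False
        for p, q in rules:
            if p in known and q not in known:
                known.add(q)
                changed = True
    return goal in known

# Composition of axons corresponds to composition of implications:
print(entails("a", "d", implications))  # a -> b -> c -> d, so True
print(entails("d", "a", implications))  # no reverse axon, so False
```

The functorial point is the structure preservation: a path in the axonal graph exists exactly when the corresponding chain of implications goes through.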

Yet, neurology is vastly more complicated than mathematical logic. We know this in many ways. The very latest research proposes experimental evidence that memories are stored in neurons (rather than synapses). Thus a neuron A is not a simple proposition.

Neurons also respond to at least 50 hormones and neurohormones, and interact through dendrites and glial cells. Thus neurons need to be described as living in a “phase space” (a Quantum concept), a universe with a vast number of dimensions, the calculus of which we cannot even guess. As some of this logic is topological (the logic of place), it is well beyond the logic used in mathematics (the latter being relatively simplistic: digital, a logic written in numbers).

The conclusion, an informed guess, is that axons, thus the implications of mathematical logic, are not disposed haphazardly, but according to the laws of a physics which we cannot imagine, let alone describe.

And out of that axonal calculus springs human mathematics.



If my hypothesis is true, mathematics reduces to physics, albeit a neuronal physics we cannot even imagine. Could we test the hypothesis?

It is natural to search for guidance in the way the discovery, and invention, of Celestial Mechanics proceeded.

The Ancient Greeks had made a gigantic scientific mistake, by preferring Plato’s geocentric hypothesis, to the more natural hypothesis of heliocentrism proposed later by Aristarchus of Samos.

The discovery of impetus and the heliocentric system by Buridan and his followers provides guidance. Buridan admitted that, experimentally, heliocentrism and “scripture” could not be distinguished.

However, Buridan pointed out that the heliocentric theory was simpler, and more natural (the “tiny” Earth rotated around the huge Sun).

So the reason to choose heliocentrism was theoretical: heliocentrism’s axiomatic was leaner, meaner, natural.

In the end, the enormous mathematical arsenal built to embody the impetus theory provided Kepler with enough mathematics to compute the orbit of Mars, which, three centuries later, definitively proved heliocentrism (and buried epicycles).
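In modern terms, the heart of the computation Kepler pioneered reduces to solving his transcendental equation M = E − e·sin E for the eccentric anomaly E. A sketch of that calculation (the Newton-iteration method and the numerical values are mine; e = 0.0934 is Mars' modern orbital eccentricity, whereas Kepler worked from Tycho Brahe's raw observations):

```python
import math

# Kepler's equation M = E - e*sin(E), solved by Newton's method.
def eccentric_anomaly(M, e, tol=1e-12):
    E = M  # good starting guess for small eccentricity
    while True:
        dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            return E

e_mars = 0.0934                 # Mars' eccentricity (modern value)
M = math.radians(60.0)          # mean anomaly, arbitrary example
E = eccentric_anomaly(M, e_mars)

# Check: plugging E back reproduces M.
print(abs(E - e_mars * math.sin(E) - M) < 1e-10)  # True
```

That a closed-form solution does not exist, only an iterative one, is part of why the Mars orbit took such an arsenal of mathematics to crack.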

Here we have a similar situation: it is simpler to consider that mathematics arises from physics we cannot yet guess, rather than the Platonic alternative of supposing that mathematics belong to its own universe out there.

My axiomatic system is simpler: there is just physics out there. Much of it we call by another name, mathematics, because we are so ignorant about the ways our mind thinks.

Another proof? One can make a little experiment. It requires a willing dog, a beach, and a stick. First tell the dog to sit. Then grab the stick, and throw it in the water, at a 40-degree angle relative to the beach. Then tell the dog to go fetch the stick. Dogs who have practiced this activity a bit will not throw themselves in the water immediately. Instead they will run along the beach a bit, and then enter the water at an angle of less than 90 degrees.

A computer analysis reveals that dogs follow exactly the path of least time given by calculus. Dogs know calculus, but they did not study it culturally! Dogs arrive at correct calculus solutions by something their neurology does. They did not consult with Plato, nor did they create calculus with their will, as Smolin would have it.
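The least-time computation behind the dog experiment can be sketched directly. The dog runs fast along the beach and swims slowly, so the optimal water-entry point satisfies a Snell's-law-type condition, y* = x / √((r/s)² − 1). The speeds and distances below are illustrative assumptions of mine, not measured data:

```python
import math

# Least-time path to a stick in the water: run along the beach at
# speed r, then swim at speed s. Stick is x meters offshore and z
# meters down the beach. The entry point y (measured back from the
# shore point nearest the stick) minimizes total time.
r, s = 6.4, 0.9          # m/s: running vs swimming (assumed values)
z, x = 20.0, 10.0        # m: along-beach and offshore distances

def total_time(y):
    return (z - y) / r + math.hypot(y, x) / s

# Calculus answer: dT/dy = 0  =>  y* = x / sqrt((r/s)**2 - 1)
y_star = x / math.sqrt((r / s) ** 2 - 1)

# Brute-force check on a fine grid over [0, 20] m:
y_grid = min((total_time(k * 0.001), k * 0.001) for k in range(20001))[1]

print(abs(y_grid - y_star) < 0.01)  # the two answers agree
```

With these numbers the dog should enter the water only about 1.4 m before the point on shore nearest the stick, exactly the "run a bit, then cut in" behavior described above.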

It is neurology which invents, constructs, the mathematics. Mathematics is not sitting in some world out there that life forms consult.

Patrice Ayme’

Why Mathematics Is Natural

April 21, 2015

There is nothing obvious about the mathematics we know. It is basically neurology we learn; that is, neurology we learn to construct (with a lot of difficulty). Neurology is all about connecting facts, things, ideas, emotions together. We cannot possibly imagine another universe where mathematics is not as given to us, because our neurology is an integral part of the universe we belong to.

Let’s consider the physics and mathematics which evolved around the gravitational law. How did the law arise? It was a cultural, thus neurological, process. More striking, it was a historical process. It took many centuries. On the way, century after century a colossal amount of mathematics was invented, from graph theory, to forces (vectors), trajectories, equations, “Cartesian” geometry, long before Galileo, Descartes, and their successors, were born.

Buridan, around 1330 CE, to justify the diurnal rotation of Earth, said we stayed on the ground, because of gravity. Buridan also wrote that “gravity continually accelerates a heavy body to the end” [In his “Questions on Aristotle”]. Buridan asserted a number of propositions, including some which are equivalent to Newton’s first two laws.

Because, Albert, Your Brain Was Just A Concentrate Of Experiences & Connections Thereof, Real, Or Imagined. "Human Thought Independent of Experience" Does Not Exist.


At some point someone suggested that gravity kept the heliocentric system together.

Newton claimed it was himself, with his thought experiment of the apple. However it is certainly not so: Kepler believed gravity varied according to 1/d. The French astronomer Bullialdus then explained why Kepler was wrong, and why gravity should vary as the inverse of the square of the distance, not just the inverse of the distance. So gravity went by 1/dd. (Bullialdus was elected to the Royal Society of London before Newton’s birth; Hooke picked up the idea, then Newton; then those two had a nasty fight, and Newton recognized Bullialdus was first. Bullialdus now has a crater on the Moon named after him, a reduced version of the Copernicus crater.)

In spite of considerable mental confusion, Leonardo finally demonstrated correct laws of motion on an inclined plane. Those Da Vinci laws, more important than his paintings, are now attributed to Galileo (who rolled them out a century later).

It took 350 years of the efforts of the Paris-Oxford school of mathematics, and students of Buridan, luminaries such as Albert of Saxony and Oresme, and Leonardo Da Vinci, to arrive at an enormous arsenal of mathematics and physics entangled…

This effort is generally mostly attributed to Galileo and Newton (who neither “invented” nor “discovered” any of it!). Newton demonstrated that the laws discovered by Kepler implied that gravity varied as 1/dd (Newton’s reasoning, using yet another new level of mathematics, Fermat’s calculus, geometrically interpreted, was different from Bullialdus’s).
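Newton's inference can be checked numerically in the easiest special case, circular orbits: an inverse-square force gives Kepler's third law T² ∝ r³, while the 1/d law Kepler himself believed would give T² ∝ r². A sketch of mine (arbitrary units, GM = 1):

```python
import math

# Circular orbits under a central force GM / r**n (per unit mass):
# centripetal balance v**2 / r = GM / r**n fixes the speed, and the
# period is T = 2*pi*r / v.
def period(r, n, GM=1.0):
    v = math.sqrt(GM * r ** (1 - n))
    return 2 * math.pi * r / v

radii = [1.0, 2.0, 4.0]

# Inverse-square (n=2): T**2 / r**3 is the same for every radius...
k_sq = [period(r, 2) ** 2 / r ** 3 for r in radii]
# ...while for an inverse-distance force (n=1) it is not constant,
k_lin = [period(r, 1) ** 2 / r ** 3 for r in radii]
# and instead T**2 / r**2 is.
k_lin2 = [period(r, 1) ** 2 / r ** 2 for r in radii]

print(max(k_sq) - min(k_sq) < 1e-9)    # Kepler's third law holds
print(max(k_lin) - min(k_lin) > 1.0)   # and fails for 1/d
```

So Kepler's observed T² ∝ a³ regularity already rules out his own 1/d guess, which is the content of Newton's (and Bullialdus's) correction.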

Major discoveries in mathematics and physics take centuries to be accepted, because they are, basically, neurological processes. Processes which are culturally transmitted, but, still, fundamentally neurological.

Atiyah, one of the greatest living mathematicians, hinted at this recently about Spinors. Spinors, discovered, or invented, a century ago by Élie Cartan, are not yet fully understood, said Atiyah (Dirac used them for physics 20 years after Cartan discerned them). Atiyah gave an example I have long used: Imaginary Numbers. It took more than three centuries for imaginary numbers (which were used for the resolution of the Third Degree equation) to be accepted. Neurologically accepted.

So there is nothing obvious about mathematics and physics: they are basically neurology we learn through a cultural (or experimental) process. What is learning? Building a neurology that makes the input we know correspond to the output we observe. It is a construction project.

Now where does neurology sit, so to speak? In the physical world. Hence mathematics is neurology, and neurology is physics. Physics in its original sense, nature, something not yet discovered.

We cannot possibly imagine another universe where mathematics is not as given to us, because the neurology it consists of forms an integral part of the universe we belong to.

Patrice Ayme’

Causality Explained

March 29, 2015


What Is Causality? What is an Explanation?

Pondering the nature of the concept of explanation is the first step in thinking. So you may say that there is nothing more important, nothing more human.

I have a solution. It is simplicity itself. I go for the obvious model:

Mathematics, logic, physics, and the rest of science give a strict definition of what causality, and an explanation, are.


Through systems of axioms and theorems.

Some of the sub-systems therein have to do with logic (“Predicate Calculus”). They are found all over science and common sense (although they will not be necessarily present in systems of thought such as, say, poetry, or rhetoric).


A and B are propositions. They do not have to be very precise.

Precision Is Not Necessarily The Smartest. Semantic Web Necessary.


As it turns out, except in Classical Computer Science as it exists today (Classical CS as opposed to Quantum CS, a subject developing over the last 20 years), propositions are never precise (so a degree of poetry is everywhere, even in mathematics!). Propositions, in practice, depend upon a semantic web.

A could be “Plate Tectonics” and B could be “Continental Drift”. That A causes B is one of the axioms of present-day geophysics.

Thus I define causality as logical implication.

To use David Hume’s example: flame F brings heat H, always, and so is supposed to cause it: F implies H. Hume deduced causality from observation of the link (if…then).

More detailed modern physics shows that the heat of flame F is agitation that can be transmitted (both a theorem about, and a definition of, heat). Now we have a full, detailed logos about F and what H means, and how F implies H, down to electronic orbitals.
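The bare logical skeleton of “F implies H” is material implication, which is false in exactly one case: cause present, effect absent, precisely the contrary observation Hume's flames never produced. A minimal rendering of mine:

```python
# Material implication: "flame F implies heat H" fails only in the
# one case observation never produces: flame without heat.
def implies(F, H):
    return (not F) or H

table = {(F, H): implies(F, H)
         for F in (True, False) for H in (True, False)}

# The single counterexample row:
print([row for row, value in table.items() if not value])
# -> [(True, False)]   i.e. flame present, heat absent
```

The other three rows are true, which is why one refuting observation suffices to break a causal claim while no number of confirmations finishes proving it.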

Mathematicians are used to making elaborate demonstrations, and then, to their horror, discovering somewhere something that cannot be causally justified. Then they have to reconsider from scratch.

Mathematics is all about causality.

“Causes” in mathematics are also called axioms. In practice, well-known theorems are used as axioms to implement further mathematical causality. A mathematician using a theorem from a distant field may not be aware of all the subtleties that allow one to prove it: he would use distant theorems he does not know the proof of, as axioms. One mathematician’s, or logician’s, axiom is another’s theorem.

(Hence some hostility between mathematicians and logicians, as much of what the former use the latter proved, but the former have no idea how!)

Causality, by the way, reflects the axonal geometry of the brain.

The full logic of the brain is much more complicated than mathematics, let alone Classical Computer Science, have it. Indeed, brain logic involves much more than axons, such as dendrites, neurotransmitters, glial cells, etc. And of these, only axonal geometry is simple enough to be approximated by classical logic… In first order.

Mathematics is causation. And the ultimate explanation. Mathematics makes causation as limpid as we can have it.

This theory met with the approval of Philip Thrift (March 27, 2015): “I agree exactly with the words Patrice Ayme wrote — but with “mathematics”→”programming”, “mathematical”→”programmatical”, etc.”

I pointed out later to Philip that Classical Programming was insufficient to embrace full human (and quantum!) logic. He agreed.

However the preceding somehow made Massimo P, a professional philosopher, uneasy. He quoted me:

“Patrice: “To claim that mathematics is not causal is beyond belief. Mathematics is all about causality.”

Massimo: It most obviously isn’t. What’s causal about Fermat’s Last Theorem? Causality implies physicality, and most of pure math has absolutely nothing whatsoever to do with physicality.

Patrice: “Causes” in mathematics are also called axioms.”

Massimo: “You either don’t understand what causality means or what axioms are. Or both.”

Well, once he had released his emotional steam, Massimo, a self-declared specialist of “physicality” [sic], did not offer one iota of logic in support of his wished-for demolition of my… logic. I must admit my simple thesis is not (yet) in textbooks…

Insults are fundamentally poetic, illogical, or pre-logical. Massimo is saying that being totally confused about causality and explanations is a sacred cow of a whole class of philosophers (to whom he has decided he belongs). Being confused about causality started way back.

“All philosophers,” said Bertrand Russell, “imagine that causation is one of the fundamental axioms of science, yet oddly enough, in advanced sciences, the word ’cause’ never occurs … The law of causality, I believe, is a relic of a bygone age, surviving, like the monarchy, only because it is erroneously supposed to do no harm …”

Russell was as wrong as wrong could be (not about the monarchy, but about “causation”). He wrote the preceding in 1913, when Relativity was well implanted, and he, like many others, was no doubt unnerved by it.

Poincare’ noticed, while founding officially “Relativity” in 1904, that apparent succession of events was not absolute (but depended upon relative motions).


But temporal succession is only an indication of possible causality. In truth causality exists if, and only if, a logical system establishes it (moreover, said logic has to be “true”; assigning a truth value is, by itself, a separate question, one that great logicians have studied without clear conclusions).

When an explanation can be fully mathematized, it is finished. Far from being “abstract”, it has become trivial, or so suppose those with minds for whom mathematics is obvious.

Mathematics is just like 2 + 2 = 4, written very large.

Fermat’s Last Theorem is not different in nature from 2 + 2 = 4… (But for something very subtle: semantic drift, and a forest of theorems used as axioms to go from one side of Fermat’s theorem to the other.)

To brandish mathematics as unfathomable “abstract” sorcery, as was done in Scientia Salon, is a strange, but not new, streak.

There, in “Abstract Explanations In Science”, Massimo and another employed philosopher pondered “whether, and in what sense, mathematical explanations are different from causal / empirical ones.”

My answer is that mathematical, and, more generally logical, explanations are the model of all explanations. We speak (logos) and thus we communicate our thoughts. Even to ourselves.

The difference between mathematics and logic? Mathematics is more poetical. For example, Category Theory is not anchored in logic, nor anywhere else. It is hanging out there, beautiful and useful, a castle in the sky, just like all and any poem.

Such ought to be the set-up on the nature of what causality could be, to figure out what causality is in the physical world. Considering that Quantum Entanglement is all over nature, this is not going to be easy (and it may contain a hidden clock).

Patrice Ayme’

Emotional Thinking Is Superior Thinking

March 11, 2015

By claiming that emotional thinking is superior, I do not mean that “logical” thinking ought to be rejected. I am just saying what I am saying, and no more. No: just the opposite, “logical” thinking ought to be embraced. However, there are many “logical” types of thought possible.

Emotional and logical thinking can be physiologically distinguished in the brain (the latter is mostly about axons; the former about the rest).

Any “logical” thinking is literally, a chain made of points. (And there are no points in nature, said a Quantum Angel who passed by; let’s ignore her, for now!)

Elliptic Geometry In Action: Greeks, 240 BCE, Understood The Difference Between Latitude & Geodesic (Great Circle)


Some say that hard logic, and mathematics is how to implement “correct thinking”. Those who say this, do not know modern logic, as practiced in logic departments of the most prestigious universities.

In truth, overall, logicians spent their careers proposing putative, potential foundations for logic. Ergo, there is no overall agreement, from the specialists of the field themselves, about what constitute acceptable foundations for “logic”.

It is the same situation in mathematics.

Actually dozens of prestigious mathematicians (mostly French) launched themselves, starting in the 1930s, into a project to make mathematics rigorous. They called their effort “Bourbaki”.

Meanwhile some even more prestigious mathematicians, or at least the best of them all, Grothendieck, splendidly ignored their efforts, and, instead, founded mathematics on Category Theory.

Many mathematicians were aghast, because they had no idea whatsoever what Category Theory could be about. They derided it as “Abstract Nonsense”.

Instead it was rather “Abstract Sense”.

But let’s take a better known example: Euclid.

There are two types of fallacies in Euclid.

The simplest one is the logical fallacy of deducing, from emotion, what the axioms did not imply. Euclid felt that two circles which looked like they should intersect, did intersect. Emotionally seductive, but not a consequence of his axioms.

Euclid’s worst fallacy was to exclude most of geometry, namely what’s not in a plane. It’s all the more striking as “Non-Euclidean” geometry had been considered just prior. So Euclid closed minds, and that’s as incorrect as incorrect can be.

To come back to logic as studied by logicians: the logicS considered therein are much more general than those used in mathematics. Yet, as no conclusion was reached, this implies that mathematics itself is illogical. That, of course, is a conclusion mathematicians detest. And the proof of their pudding is found in physics, computer science, engineering.

So what to do, to determine correct arguments? Well, direct towards any argument an abrasive, offensive malevolence, trying to poke holes, just as a mountain lion’s canines try to pass between vertebrae to dislocate a spine.

That’s one approach. The other, more constructive, but less safe, is to hope for the best, and launch logical chains into the multiverses of unchained axiomatics.

Given the proper axioms, (most of) an argument can generally be saved. The best arguments often deserve better axiomatics (so it was with Leibniz’s infinitesimals).

So, de facto, people have long been using not just “inverse probability”, but “inverse logic”. In “inverse logic”, axioms are derived from what one FEELS ought to be a correct argument.

Emotions driving axiomatics is more metalogical, than axiomatics driving emotions.


To the preceding philosophy professor Massimo Pigliucci replied (in part) that:


“…Hence, to think critically, one needs enough facts. Namely all relevant facts.”

Enough facts is not the same as all the relevant facts, as incorrectly implied by the use of “namely.” 

“It is arrogant to think that other people are prone to “logical fallacies”.”

It is an observation, and facts are not arrogant. 

“A Quantum Wave evaluates the entirety of possible outcomes, then computes how probable they are.”

Are you presenting quantum waves as agents? They don’t evaluate and compute, they just behave according to the laws of physics.

“just as with the Quantum, this means to think teleologically, no holds barred”

The quantum doesn’t think, as far as I know. 

“Emotional Thinking Is Superior Thinking” 

I have no idea what you mean by that. Superior in what sense? And where’s the bright line between reason and emotion?

“Any “logical” thinking is literally, a chain made of points”

No, definitely not “literally.” 

It may not follow from the axioms, but I am having a hard time being emotionally seductive by intersecting circles. 

“Euclid’s worst fallacy was to exclude most of geometry, namely what’s not in a plane.”

That’s an historically bizarre claim to make. Like saying that Newton’s worst fallacy was to exclude considerations of general relativity. C’mon. 

“as no conclusion was reached, this implies that mathematics itself is illogical” 

Uhm, no. 

“to hope for the best, and launch logical chains in the multiverses of unchained axiomatics” 

Very poetic, I have no idea what that means, though.”


Massimo Pigliucci is professor of philosophy at CUNY in New York, and has doctorates both in biology and philosophy. However, truth does not care whether one has one, or two thousand, doctorates. It would take too long to address all of Massimo’s errors (basically all of his retorts above). Let me just consider two points where he clings to Common Wisdom like a barnacle to a rock: the question of Non-Euclidean geometry, and of the Quantum. He published most of the answer below on his site:

Dear Massimo:

Impertinence and amusement help thought. Thank you for providing both. Unmotivated thought is not worth having.

The Greeks discovered Non-Euclidean geometry. It’s hidden in plain sight. It is a wonder that, to this day, so many intellectuals repeat Gauss’ self-serving absurdities on the subject (Gauss disingenuously claimed that he had discovered it all before Janos Bolyai, but did not publish it because he feared the “cries of the Boeotians”… aka the peasants; Gauss does not tell you that a professor of jurisprudence had sketched to him how Non-Euclidean geometry worked… in 1818! We have the correspondence.)

The truth is simpler: Gauss did not think of the possibility of Non-Euclidean geometry (although he strongly suspected Euclidean geometry was not logical). Such a fame greedster could not apparently resist the allure of claiming the greatest prize…

It is pretty abysmal that most mathematicians are not thinking enough, and honest enough, to be publicly aware of Gauss’ shenanigans (Gauss is one of the few Muhammads of mathematics). But that fits the fact that they want mathematics to be an ethereal church, the immense priests of which they are. To admit Gauss got some of his ideas from a vulgar lawyer is, assuredly, too painful.

That would be to admit the “Prince of Mathematics” was corrupt, thus all mathematicians too (and, indeed, most of them are! Always that power thing; to recognize that ideas have come from outside the hierarchy in mathematics is injurious to the hierarchy… And by extension to Massimo.)

So why do I claim the Greeks invented Non-Euclidean geometry? Because they did; it’s a fact. It is like having the tallest mountain in the world in one’s garden, and not having noticed it: priests, and princes, are good at this; thus, so are most mathematicians.

The Greek astronomer Ptolemy wrote in his Geography (circa 150 CE):

“It has been demonstrated by mathematics that the surface of the land and water is in its entirety a sphere…and that any plane which passes through the centre makes at its surface, that is, at the surface of the Earth and of the sky, great circles.”

Not just this, but, nearly 400 years earlier, Eratosthenes had determined the size of Earth (missing by just 15%).

How? The Greeks used spherical geometry.

Great circles are the “straight lines” of spherical geometry. This is a consequence of the properties of a sphere, in which the shortest distances on the surface are great circle routes. Such curves are said to be “intrinsically” straight.
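Eratosthenes' measurement is a one-line proportion on such a great circle: with the Sun's rays parallel, the shadow angle at Alexandria equals the arc Alexandria-Syene as seen from Earth's center. A sketch using the traditionally reported figures (7.2 degrees and 5000 stadia; the length of the stadium itself is uncertain, so the metric conversion below is only one common assumption):

```python
# Eratosthenes' proportion: at the summer solstice the noon Sun is
# vertical at Syene but casts a 7.2-degree shadow at Alexandria,
# 5000 stadia away. The arc is therefore 7.2/360 of a great circle.
shadow_angle_deg = 7.2
arc_stadia = 5000.0

circumference_stadia = arc_stadia * 360.0 / shadow_angle_deg
print(round(circumference_stadia))  # 250000 stadia

# With a commonly assumed stadium of ~157.5 m (an uncertain unit),
# that is about 39,375 km, within a few percent of the modern
# 40,008 km meridional circumference.
print(round(circumference_stadia * 157.5 / 1000))
```

The whole computation is spherical geometry plus one sighting; no telescope, no calculus.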

Better: Eusebius of Caesarea proposed 149 million kilometers for the distance to the Sun! (Essentially the modern value.)

Gauss, should he be around, would whine that the Greeks did not know what they were doing. But the Greeks were no fools. They knew what they were doing.

Socrates killed enemies in battle. Contemporary mathematicians were not afraid of the Boeotians, contrary to Gauss.

Aristotle (384-322 BCE) was keen to demonstrate that logic could be many things. Aristotle was concerned with the dependency of logic on the axioms one uses. Thus Aristotle’s Non-Euclidean work is contained in his works on Ethics.

A thoroughly modern approach.

The philosopher Imre Toth observed the blatant presence of Non-Euclidean geometry in the “Corpus Aristotelicum” in 1967.

Aristotle exposed the existence of geometries different from plane geometry. The approach is found in no less than SIX different parts of Aristotle’s works. Aristotle outright says that, in a general geometry, the sum of the angles of a triangle can be equal to, or more than, or less than, two right angles.

One cannot be any clearer about the existence of Non-Euclidean geometry.

Actually Aristotle introduced an axiom, Aristotle’s Axiom, a theorem in Euclidean and Hyperbolic geometry (it is false in Elliptic geometry, thus false on a sphere).

Related to Aristotle’s Axiom is Archimedes’ Axiom (which belongs to modern Model Theory).

One actually finds non-trivial, beautiful NON-Euclidean theorems in Aristotle (one of my preferred frienemies).

Non-Euclidean geometry was most natural: look at a sphere, look at a saddle, look at a pillow. In Ethika ad Eudemum, Aristotle rolls out the spectacular example of a quadrangle with the maximum sum of eight right angles for its interior angles.
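Aristotle's “more than two right angles” is quantitative on a sphere: Girard's theorem says a spherical triangle's angle sum exceeds π by its area divided by R². The octant triangle (the equator plus two meridians 90 degrees apart) has three right angles. A numerical check (the vector construction is my own illustration):

```python
import math

# Angle sum of a spherical triangle, vertices as unit vectors from
# the sphere's center. The corner angle at A is the angle between
# the planes OAB and OAC, read off their normals.
def cross(u, v):
    return (u[1]*v[2]-u[2]*v[1], u[2]*v[0]-u[0]*v[2], u[0]*v[1]-u[1]*v[0])

def dot(u, v):
    return sum(a*b for a, b in zip(u, v))

def corner_angle(A, B, C):
    n1, n2 = cross(A, B), cross(A, C)
    return math.acos(dot(n1, n2) /
                     (math.sqrt(dot(n1, n1)) * math.sqrt(dot(n2, n2))))

# Octant triangle: equator meets two meridians 90 degrees apart.
A, B, C = (1, 0, 0), (0, 1, 0), (0, 0, 1)
angle_sum = (corner_angle(A, B, C) + corner_angle(B, A, C)
             + corner_angle(C, A, B))

print(math.isclose(angle_sum, 1.5 * math.pi))  # three right angles
# Spherical excess = area / R**2 = (1/8 of 4*pi) = pi/2:
print(math.isclose(angle_sum - math.pi, math.pi / 2))
```

So the “extra” half right angle per corner directly measures the fraction of the sphere the triangle covers, which is why flat-plane intuition fails the moment one leaves the plane.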

Do Quantum Waves think? Good question, I have been asking it of myself for all too many decades.

Agent: from the Latin “agentem”, what sets in motion. Quantum Waves are the laws of physics: given a space, they evaluate, compute. This is the whole idea of the Quantum Computer. So far, they have been uncooperative. Insulting them won’t help.

Patrice Ayme’


October 27, 2014

What is the mind made of? We have progressed enormously as far as the brain objects are concerned: neurons, axons, dendrites, glial cells, neurohormones, various organs and substructures in the brain, etc.

But is there a broad mathematical framework to envision how this is all organized? There is! Category Theory! It turns out it’s a good first order approximation of mind organization. At least, so I claim.

Category Theory is about diagrams. Category Theory has increasingly, and advantageously, been replacing Set Theory. It’s not only because Category Theory does not have to ponder the nature of objects, elements, sets.

Category Theory was long derided as “abstract nonsense” and “diagram chasing”. But it gives very deep, powerful theorems.

I claim the powerful theorems of Category Theory should translate directly into… neurology.

Amusingly, although I accused Aristotle of having demolished democracy and fostered plutocracy through his beloved pets, the mass-murdering criminal plutocratic psychopaths Alexander and Antipater, I recognize humbly that it’s the same Aristotle who invented categories (thus making him a great thinker, and justifying an Aristotle cult among those who need to have cults to feel good about themselves)…

Aristotle’s meta-idea about categories was just to talk about the most fundamental notions:

The present essay was suggested by, and is an extension of, what the honorable Bill Skaggs seems to have wanted to say in Scientia Salon, in his “Identity: A Neurobiological Perspective”. (As far as I can comprehend.)

However, forget Theseus’ ship and Hollywood’s Star Trek “Transporter”. As I said in “Quantum Identity Is Strong”, Quantum Identity is not erasable, and makes those time-honored examples hopelessly disconnected from reality. The notion of identity has thus to be found elsewhere (as we intuitively know that there is such a notion).

According to modern Quantum Field Theory, we are made, at the most fundamental level, of fluctuating fields. They come and go, out of nowhere. So, that way, we are continually being deconstructed and rebuilt. The question naturally arises: what is preserved of me, as a set of Quantum Fields? Well, the most fundamental mathematical structure is preserved.

The same seems to hold, to a great extent, in neurobiology, as neuro circuitry, to some extent, seems to come, go, and come back.

Thus we are all like old wooden Greek ships, perpetually falling apart, and rebuilt.

To some extent, this is what happens to species, through reproduction: cells split, and reproduce themselves, thanks to DNA.

A species has identity. Yet that identity is made of DISCONTINUOUS elements: the individuals who incarnate the species, who are born, and then die. And others appear, just the same, sort of. How is that possible?

A species’ identity is its structure. Just as a neurology, or an elementary particle identity is its structure. Not just a geometric structure, not just a topological structure, but its structure, as the most fundamental notion, as a category.

So what is preserved? Shape. And how to morph said shapes… Naturally (there is a notion of natural transformation, in Category Theory).

Historically, analyzing shape was systematized by the Greeks: Euclidean geometry, cones, etc. Then, at the end of the Nineteenth Century, it was found that geometry studied shapes mostly by studying distance, and yet, even if distance was denied consideration, there was a more fundamental notion of shape, topology. That was the structure of shapes as defined by neighborhoods.

Two generations later, Category Theory arrived. Category Theory is about morphisms, and the structures which can be built with them. Please listen to the semantics: structures, morphing… This is all about shapes reduced to their most basic, simplest symbolic expression. It’s no wonder that it would come in handy to visualize neurological structures.

A morphism connects a pair of “objects” (CT leaves unspecified what the “objects” are). To model that neurologically, we can just identify ‘objects’ with neurons (or other neurological structures), and morphisms with axons (although dendrites, and more, could be included in a second stage, when the categorical modelling becomes more precise).

The better model is category theory. When are two diagrams equivalent? When are they IDENTICAL? Cantor defined as of the same cardinal two sets in bijection (a bijection is a 1-to-1, onto map).

Category Theory defines as identical two diagrams with the same shape (a drawing reduced to its simplest essence). Say: A>B>C>D>A is the same as E>F>G>H>E.

Thus, when are two diagrams identical in category theory? When they are modelled by the same neuronal network. (Or, more exactly, axonal network: make each arrow “>” above, into an axon.) And reciprocally!
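That identity criterion can be made concrete: two finite diagrams are “the same” when some relabeling of objects carries one arrow set exactly onto the other. A brute-force sketch of mine, checking the two four-cycles from the text:

```python
from itertools import permutations

# Two diagrams as sets of arrows (axons): A>B>C>D>A and E>F>G>H>E.
d1 = {("A", "B"), ("B", "C"), ("C", "D"), ("D", "A")}
d2 = {("E", "F"), ("F", "G"), ("G", "H"), ("H", "E")}

def same_diagram(g1, g2):
    """Is there a relabeling of objects carrying g1's arrows onto g2's?"""
    v1 = sorted({x for e in g1 for x in e})
    v2 = sorted({x for e in g2 for x in e})
    if len(v1) != len(v2) or len(g1) != len(g2):
        return False
    for perm in permutations(v2):
        relabel = dict(zip(v1, perm))
        if {(relabel[a], relabel[b]) for a, b in g1} == g2:
            return True
    return False

print(same_diagram(d1, d2))     # True: both are directed 4-cycles
# Reversing a single arrow destroys the identity:
d3 = (d2 - {("H", "E")}) | {("E", "H")}
print(same_diagram(d1, d3))     # False
```

In the neurological reading above, the relabeling is exactly the claim that two axonal networks with the same wiring diagram realize the same structure, whatever the individual neurons are.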

Discussing the mind will involve discussing the most fundamental structures constituting it. What better place to start, than the most basic of maths? Especially if it looks readily convertible in neural networks.

Category Theory is the most fundamental theoretical structure we know of. It is the essence of identity, and identification. In conclusion, two objects are identical, neurologically, and in fundamental physics, if they are so, in category theory.

Time to learn something categorically new!

Patrice Ayme’


Note: No True Isolated Rocks: In other news, and to address a point of Bill Skaggs, whether a rock can be truly isolated is an open problem, experimentally speaking.

According to the theory of gravitation of Einstein and company, a rock cannot be isolated. Why? Because the rock is immersed in spacetime. Spacetime is animated by gravitational waves: this is what the Einstein Field Equation implies. Now, according to an unproven, but hoped-for principle of fundamental physics, to each force field is associated a particle. In the case of gravity, that hoped-for particle is called the graviton. “Particle” means a “particular” effect. Thus, an isolated rock, according to established theory, and hoped-for theory, ought to be adorned occasionally with a new particle, a new graviton, thus ought not to be isolated.

In my own theory, Objective Quantum Physics, on top of the preceding standard effect, resolving Quantum Entanglements, ought to create even more particles in “isolated” rocks.

Universe: Not Just Mathematical

August 14, 2014

Some claim the “Universe is mathematical”. Their logic is flawed. I show why.

Max Tegmark, an MIT physics professor, wrote “Our Mathematical Universe”. I present here an abstract I concocted of an interview he just gave to La Recherche, followed by my own incisive comments. However absurd Tegmark may sound, I changed nothing of the substance of what he said:

La Recherche (France; Special Issue on Reality, July-August 2014): Max, you said “Reality is only mathematical”. What do you mean?

Tegmark: The idea that the universe is a mathematical object is very old. It goes all the way back to Euclid and other Greek scientists. Everywhere around us, atoms, particles are all defined by numbers. Spacetime has only mathematical properties.

La Recherche: Everything is math, according to you?

Formulation Before Revelation of Mathematization

Tegmark: Think about your best friend. Her great smile, her sense of humor. All this can be described by equations. Mathematics explains why tomatoes are red and bananas yellow. Brout, Englert, and Higgs predicted a boson giving mass to all other particles. Its discovery in 2012 at CERN in Geneva led to the 2013 Nobel Prize in Physics!

Tyranosopher [unamused]: Notice, Max Tegmark, that the “Nobel” thoroughly excites you. You brandish it, as if it were a deep reality about the universe. But, in truth, the Nobel is strictly nothing for the universe. It’s just a banana offered by a few self-interested apes to other self-fascinated apes. The Nobel has more to do with the nature of apish society, rather than that of the universe. In other words, we ask you about the nature of the universe, and you answer with the Authority Principle among Hominidae. You may as well quote the Qur’an.

Tegmark [unfazed]: There are an enormous number of things that equations do not explain. Consciousness, for example. But I think we will make it. We are just limited by our imagination and our creativity.

La Recherche: According to you, there is no reason that any part of the world should escape mathematics?

Max Tegmark: None whatsoever. All properties are mathematical! We potentially can understand everything!

La Recherche: As a Platonic mathematician, you consider that mathematical concepts are independent of all and any conscious acts?

MT: I am an extreme Platonist, as I think that not only are mathematical structures real, but they are all that reality is.

Relativity and Quantum Physics confirmed that reality is always very different from what one believes. Very strange, and very different from our intuition. Schrödinger’s equation, the fundamental equation of Quantum Mechanics, shows that a particle can be in several places at the same time. Thus one does not try to describe the motion of this particle, but the probability of its presence in such and such a place.

But, a century later, physicists are still in deep disagreement about what it all means. I think this interpretation keeps dividing people, because they refuse to admit what goes against their intuition.

Tyranosopher: Notice, Max Tegmark, that you presented as a fact (“a particle can be in several places at the same time”) something you admit later is only an “interpretation”. That’s dishonest: an “interpretation” is not a “fact”.

Tegmark [livid]: The strength of mathematics comes from the fact that they have no inhibition. Strangeness does not stop them.

Tyranosopher: Indeed, that’s why, as a trained mathematician, I am very insolent.

La Recherche: Max Tegmark, is it your mathematical approach that makes you defend another controversial idea, that of multiple universes?

Max Tegmark: I really believe that human beings never think big enough. We underestimate our capability to understand the world through mathematics, but also our capacity to apprehend its dimensions. To understand that we live on a planet with a diameter of a bit more than 12,000 kilometers was a first, enormous, step. That this planet is infinitesimal in this galaxy, itself one out of billions, was another enormous step. The idea of multiverses is more of the same. We discover again, and more, that what we understand is only a speck of something much larger. That much larger thing is the Multiverses, of types I, II, III, and IV.

Tyranosopher: La Recherche’s Interview then proceeds further, but let me unleash a fundamental critique here.

I am a deadly enemy of the Multiverse, as I believe that it rests on an ERROR of interpretation of Quantum Physics (the one Tegmark presented as a fact above, before admitting that it was, well, only an interpretation). The fact that it is another desperate scaffolding erected to save the Big Bang theory does not make it better.

Now for the notion of the universe being full of math. This is understood to mean that the universe is full of equations. Equations were invented in the Sixteenth Century. Many, if not most, equate mathematics with the art of equating.

What’s an equation? It’s something that says that two things independently defined, one on the left side of the equal sign, the other on the right side, are equal. Great. What could be simpler: what is different is actually the same!

Notice this, though: before you can equate, you must define what you are equating. On both sides.

An equation equates concepts independently defined. Ultimately, definitions are not mathematical (see on the Nature of Mathematics, to follow soon). At best, definition is metamathematical. Our metamathematical universe? End of Mr. Tegmark’s naivety.

When we get down to it, it’s more our philosophical universe, before it’s our mathematical universe: no definitions, no equations.

How can a physicist make such a gross logical mistake? Are they not supposed to be smart? (OK, it’s smart to sell lots of books).

What allows one to make that logical mistake? Education, or lack thereof. Many a mathematician will make the same mistake too. The problem is that neither conventional mathematicians nor, a fortiori, physicists are trained logicians. They just play some in the media.

Who needs a multiverse? It seems the universe of science is already too large for many physicists to understand.

Patrice Ayme’