Posts Tagged ‘Axioms’

Neurons, Axons, Axioms

March 30, 2015

(Second Part of “Causality Explained”)

Axiomatic Systems Are Fragile:

Frege was one of the founders of mathematical logic and analytic philosophy. Frege wrote the Grundgesetze der Arithmetik [Basic Laws of Arithmetic], planned in three volumes. He published the first volume in 1893 (paying for it himself). Just before the second volume was going to press, in 1903, a young Bertrand Russell informed Frege of a dangerous contradiction, Russell’s paradox (a variant of the Cretan Liar Paradox). Frege was thrown into total confusion: a remedy he tried to apply reduced the number of objects his system could be applied to, to just ONE. Oops.

Frege was no dummy: he invented quantifiers (Second Order Logic, crucial to all of mathematics). It is just that logic can be pitiless.

If Those Neurons Evolved Independently From Ours, Neurons Solve Thinking

Neurons are (part of) the solution to the problem of thinking, a problem so deep, we cannot conceive of it. A second independent evolution of neuronicity would certainly prove that.

Truer Axiomatics Is Simpler, More Powerful:

Russell and Whitehead, colossal mathematicians and philosophers, decided to demonstrate that 1 + 1 = 2, without falling into “Cretan Liar” self-contradictions.

They wrote a book to do so. In the second volume, around page 200, they succeeded.

I prefer simpler axioms to get to 1 + 1 = 2.

(Just define the right hand side with the left.)
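A minimal sketch of that simpler route, in Lean (my illustration, not Russell’s or Whitehead’s formalism): once 2 is defined as the successor of 1, and addition is defined by recursion on the successor, the equality holds by mere unfolding of definitions.

```lean
-- Sketch: with the naturals built from zero and successor, 2 is defined as
-- the successor of 1, and addition is defined by recursion; so 1 + 1 = 2
-- reduces to a definitional equality, proved by `rfl` (reflexivity).
example : 1 + 1 = 2 := rfl
```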

It would be interesting if philosophers defined for us what “causing” means, and what “causality” is. Say, with explicit examples.

I want to know what cause causes. It’s a bit like pondering what “is” is.

Some creatures paid as philosophers by employers know 17th century physics: something about billiard balls, taught in first-year undergraduate physics. (I know it well; I have taught it more than once.) Then they think they know science. All they know is Middle-Ages physics.

These first-year undergraduates then try to explain the entire world with the nail and hammer they know so well.

They never made it to Statistical Mechanics, Thermodynamics, etc. And the associated “Causality” of these realms of knowledge.

***

Axiomatics Of Causality With The Quantum:

How does “causality” work in the Quantum Mechanics we have?

You consider an experiment, analyze its eigenstates, set up the corresponding Hilbert space, and then compute.
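As a toy illustration of that recipe (a sketch of mine, with a made-up two-level Hamiltonian, not anything specific to this post): diagonalize the Hamiltonian to get the eigenstates, then project the prepared state onto them to compute the outcome probabilities.

```python
import numpy as np

# A toy two-level "experiment": the observable is a Hermitian matrix
# acting on a 2-dimensional Hilbert space (here the Pauli-x matrix).
H = np.array([[0.0, 1.0],
              [1.0, 0.0]])

# Analyze its eigenstates: eigh returns eigenvalues and an orthonormal basis.
energies, eigenstates = np.linalg.eigh(H)

# "Set up the Hilbert space, then compute": prepare a state (spin up along z)
# and obtain the Born-rule probabilities of each eigenstate as an outcome.
psi = np.array([1.0, 0.0])
amplitudes = eigenstates.conj().T @ psi
probabilities = np.abs(amplitudes) ** 2

print("eigenvalues:  ", energies)       # [-1.  1.]
print("probabilities:", probabilities)  # [0.5 0.5]
```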

“Billiard Balls” is what seems to happen when the associated De Broglie wave has such high frequency that the eigenstates seem continuous.

So Classical Mechanical “causality” is an asymptote.

***

Know How To Dream… To Bring Up New Axiomatics

Human beings communicate digitally (words and their letters or ideograms), and through programs (aka languages, including logic and mathematics).

All of this uses conventions, “rules”, truths I call axioms, to simplify… the language (calling them all “axioms” is not traditional, as many of these rules have had their own names for 25 centuries).

So for example, I view the “modus ponens” (if P implies Q and P happens, then Q) as an axiom (instead of just a “logical form” or “rule of inference”).
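In a proof assistant, this “axiom” is literally just function application: a proof of “P implies Q”, applied to a proof of P, yields a proof of Q. A one-line Lean sketch (my illustration):

```lean
-- Modus ponens: from a proof that P → Q and a proof of P, conclude Q.
-- Type-theoretically it is nothing more than applying the implication to the premise.
example (P Q : Prop) (hPQ : P → Q) (hP : P) : Q := hPQ hP
```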

The reason to call basic “logical forms” “axioms” is that they are more fragile than they look. One can do with them, or without them. All sorts of non-classical logics do without the “excluded third” law (the law of the excluded middle); fuzzy set theory, for example.
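A tiny numerical sketch of that (with a membership function I made up for illustration): in fuzzy set theory an element can belong to a set to degree 0.7 and to its complement to degree 0.3, so “A or not A” is not fully true, and the excluded third fails.

```python
# Fuzzy membership: degrees in [0, 1] instead of the classical {0, 1}.
def mu_tall(height_cm: float) -> float:
    """Made-up membership function for the fuzzy set 'tall'."""
    return min(1.0, max(0.0, (height_cm - 160.0) / 40.0))

height = 188.0
a = mu_tall(height)          # degree of membership in "tall"
not_a = 1.0 - a              # standard fuzzy complement
a_or_not_a = max(a, not_a)   # standard fuzzy union (max)

print(round(a, 2), round(not_a, 2), round(a_or_not_a, 2))
# 0.7 0.3 0.7 -> "tall or not tall" only holds to degree 0.7 < 1:
# the excluded third (A or not-A is always fully true) is violated.
```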

With such semantics, one realizes that all great advances in understanding have to do with setting up more appropriate axioms.

***

Buridan’s Revolution, Or An Axiomatics Revolution:

In the Fourteenth Century, the intellectual movement launched by Buridan included Oresme and the Oxford Calculators. They discovered inertia, momentum (“impetus”), graphs, the law of falling bodies, and the heliocentric system (indistinguishable from the geocentric system, said Buridan, wryly, but we may as well stick to the latter, as it is in Scripture).

Buridan’s revolution is little known. But it was no accident: Buridan refused to become a theologian; he stuck to the faculty of arts (so Buridan did not have to waste time in sterile debates with god cretins… unlike nearly all intellectuals of the time). Much of Buridan is still in untranslated Medieval Latin, which may explain the obscurity, after centuries of Catholic war against him.

http://www.encyclopedia.com/topic/Jean_Buridan.aspx

These breakthroughs were major, and consisted of a number of new axioms (now often attributed to Galileo, Descartes, Newton). The axioms had a tremendous psychological effect. At the time, Buridan, adviser to no fewer than four kings and head of the University of Paris, was untouchable.

The philosopher, mathematician, physicist and politician died in 1360. In 1473, the pope and king Louis XI conspired to try to stop the blossoming Renaissance.

More than a century after his death, Buridan’s works, his new axioms, were made unlawful to read. (However, Buridan was mandatory reading in Cracow, and Copernic re-published the work as soon as he was safely ensconced on his death bed.)

The mind, the brain, is quite fuzzy (in the sense of fuzzy set theory; the dreaming part; think of dendrites, prominences within synapses, starfish-like astrocytes, neurotransmitters, etc.). Axioms, and axons, make it possible to code it digitally. So mathematization and programming are intrinsic human mental activities.

***

We Are All Theoretical Scientists Of The Mathematical Type:

Human beings continually draw consequences from the axioms they have, through the intermediary of giant systems of thought, and systems of mood (mentality for short).

When reality comes to drastically contradict expected consequences, mentality is modified, typically in the easiest way, with what I call an ANTI-IDEA.

For example, when a number of physics Nobel laureates (Lenard, Stark) were anxious to rise in the Nazi Party, they had to reconcile the supposed inferiority of the Jews with the fact that Einstein was a Jew. Nor could they admit that Poincare’ invented Relativity, as he was from the most hated nation (and from one of the most anti-German families in France!).

So they simply claimed that it was all “Jewish Science” (this way they did not have to wax lyrical about why they had collaborated with Einstein before turning anti-Jewish).

When brute force anti-ideas don’t work after all (as became clear to Germans in 1945), then a full re-organization of the axiomatics is in order.

An example, as I said, is fuzzy set theory. It violates the Excluded-Third Law.

But sometimes the reconsideration may be temporary. (Whether A and Non-A holds in the LOGIC of Quantum Mechanics, the Einstein-Schrodinger Cat, is a matter of heated debate.)

***

Quantum Logic: Both In & Out Of This World:

The removal of old logical axioms can be definitive. For example the Distributive Law of Propositional Calculus fails in Quantum Logic. That has to do with the Uncertainty Principle, a wave effect that would be etched in stone, were it not even more fundamental.

http://en.wikipedia.org/wiki/Quantum_logic
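The textbook spin-1/2 illustration (not from the post; propositions are closed subspaces, “and” is intersection, “or” is closed span): let p be “spin along z is up”, q “spin along x is up”, r “spin along x is down”. Since q or r spans the whole space, while p is compatible with neither q nor r alone:

```latex
\[
p \wedge (q \vee r) \;=\; p \wedge \mathbf{1} \;=\; p,
\qquad
(p \wedge q) \vee (p \wedge r) \;=\; \mathbf{0} \vee \mathbf{0} \;=\; \mathbf{0},
\]
\[
\text{so}\quad p \wedge (q \vee r) \;\neq\; (p \wedge q) \vee (p \wedge r).
\]
```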

***

Verdict? Neurons, Axons, And Axioms Make One System:

We have been playing with axioms for millions of years: they reflect the hierarchical, axon dominated, neuron originated most basic structure of the nervous system.

Why?

Well, the neuronal-axonal skeleton of minds is probably the lowest energy solution to the problem of thinking in the appropriate space. It has just been proposed neurons evolved twice:

https://www.quantamagazine.org/20150325-did-neurons-evolve-twice/

We do not just think axiomatically, but we certainly communicate axiomatically, even with ourselves. And the axiomatics are dynamical. Thus causes learn to fit effects.

The fact that this work is partly subjective does not mean it has nothing to do with nature. Just the opposite: causality is nature answering the call of nature, with a flourish.

Human mentality is a continual dialogue between nature inside (Claude Bernard) and nature outside.

Changing axioms is hard work: it involves brain re-wiring. Not just connecting different neurons, but also probably modifying them inside.

Mathematicians have plenty of occasions to ponder what a proof (thus an explanation) is. The situation is worse than ever, with immense proofs that only the author understands (Fermat’s Last Theorem was just an appetizer), or computer-assisted proofs (nobody can check what happened, and it’s going to get worse with full Quantum Computers).

Not all and any reasoning is made to be understood by everybody. (Mathematicians have to use alien math they don’t really understand, quite often.)

Yes, thinking is hard. And not always nice. But somebody has to do it. Just remember this essence, when trying to communicate with the stars: hard, and not always nice.

Patrice Ayme’

Causality Explained

March 29, 2015

WHAT CAUSES CAUSE?

What Is Causality? What is an Explanation?

Pondering the nature of the concept of explanation is the first step in thinking. So you may say that there is nothing more important, nothing more human.

I have a solution. It is simplicity itself. I go for the obvious model:

Mathematics, logic, physics, and the rest of science give a strict definition of what causality is, and of what an explanation is.

How?

Through systems of axioms and theorems.

Some of the sub-systems therein have to do with logic (“Predicate Calculus”). They are found all over science and common sense (although they will not be necessarily present in systems of thought such as, say, poetry, or rhetoric).

WHEN A IMPLIES B, IN A LOGOS, ONE OUGHT TO SAY THAT A “CAUSES” B.

A and B are propositions. They do not have to be very precise.

Precision Is Not Necessarily The Smartest. Semantic Web Necessary.

As it turns out, except in Classical Computer Science as it exists today (Classical CS as opposed to Quantum CS, a subject developed over the last 20 years), propositions are never precise (so a degree of poetry is everywhere, even in mathematics!). Propositions, in practice, depend upon a semantic web.

A could be: “Plate Tectonics” and B could be “Continental Drift”. That A causes B is one of the axioms of present-day geophysics.

Thus I define causality as logical implication.

To use David Hume’s example: flame F brings heat H, always, and so is supposed to cause it: F implies H. Hume deduced causality from observation of the link (if…then).

More detailed modern physics shows that the heat of flame F is agitation that can be transmitted (both a theorem about, and a definition of, heat). Now we have a full, detailed logos about F and what H means, and how F implies H, down to electronic orbitals.
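Read this way, “F causes H” is the material conditional F implies H: it is falsified only by a flame without heat. A trivial sketch of mine, checking Hume-style observations against that reading:

```python
def implies(a: bool, b: bool) -> bool:
    """Material conditional: A -> B is false only when A holds and B does not."""
    return (not a) or b

# Hume-style record of observations: (flame?, heat?) pairs.
observations = [(True, True), (False, True), (False, False), (True, True)]

# "Flame causes heat", read as the implication F -> H, survives as long as
# no observation shows a flame without heat.
print(all(implies(flame, heat) for flame, heat in observations))            # True

# A single counter-example -- flame, no heat -- falsifies the implication.
print(all(implies(f, h) for f, h in observations + [(True, False)]))        # False
```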

Mathematicians are used to making elaborate demonstrations, and then, to their horror, discovering somewhere something that cannot be causally justified. Then they have to reconsider from scratch.

Mathematics is all about causality.

“Causes” in mathematics are also called axioms. In practice, well-known theorems are used as axioms to implement further mathematical causality. A mathematician using a theorem from a distant field may not be aware of all the subtleties that allow one to prove it: he would use distant theorems, whose proofs he does not know, as axioms. One mathematician’s, or logician’s, axiom is another’s theorem.

(Hence some hostility between mathematicians and logicians, as much of what the former use the latter proved, but the former have no idea how!)

Causality, by the way, reflects the axonal geometry of the brain.

The full logic of the brain is much more complicated than mathematics, let alone Classical Computer Science, would have it. Indeed, brain logic involves much more than axons: dendrites, neurotransmitters, glial cells, etc. And of these, only axonal geometry is simple enough to be approximated by classical logic… to first order.

Mathematics is causation. And the ultimate explanation. Mathematics makes causation as limpid as we can have it.

This theory met with the approval of Philip Thrift (March 27, 2015): “I agree exactly with the words Patrice Ayme wrote — but with “mathematics”→”programming”, “mathematical”→”programmatical”, etc.”

I pointed out later to Philip that Classical Programming was insufficient to embrace full human (and quantum!) logic. He agreed.

However, the preceding somehow made Massimo P., a professional philosopher, uneasy. He quoted me:

“Patrice: “To claim that mathematics is not causal is beyond belief. Mathematics is all about causality.”

Massimo: It most obviously isn’t. What’s causal about Fermat’s Last Theorem? Causality implies physicality, and most of pure math has absolutely nothing whatsoever to do with physicality.

Patrice: “Causes” in mathematics are also called axioms.”

Massimo: “You either don’t understand what causality means or what axioms are. Or both.”

Well, once he had released his emotional steam, Massimo, a self-declared specialist of “physicality” [sic], did not offer one iota of logic in support of his wished-for demolition of my… logic. I must admit my simple thesis is not (yet) in textbooks…

Insults are fundamentally poetic, illogical, or pre-logical. Massimo is saying that being totally confused about causality and explanations is a sacred cow of a whole class of philosophers (to which he has decided he belongs). Being confused about causality started way back.

“All philosophers,” said Bertrand Russell, “imagine that causation is one of the fundamental axioms of science, yet oddly enough, in advanced sciences, the word ‘cause’ never occurs… The law of causality, I believe, is a relic of a bygone age, surviving, like the monarchy, only because it is erroneously supposed to do no harm…”

Russell was as wrong as wrong could be (not about the monarchy, but about “causation”). He wrote the preceding in 1913, when Relativity was well implanted, and he, like many others, was no doubt unnerved by it.

Poincare’ noticed, while officially founding “Relativity” in 1904, that the apparent succession of events is not absolute (it depends upon relative motions).

Indeed.

But temporal succession is only an indication of possible causality. In truth causality exists if, and only if, a logical system establishes it (moreover, said logic has to be “true”; assigning a truth value is, by itself, a separate question that great logicians have studied without clear conclusions).

When an explanation can be fully mathematized, it is finished. Far from being “abstract”, it has become trivial, or so suppose those with minds for whom mathematics is obvious.

Mathematics is just like 2 + 2 = 4, written very large.

Fermat’s Last Theorem is not different in nature from 2 + 2 = 4… (But for something very subtle: semantic drift, and a forest of theorems used as axioms to go from one side of Fermat’s theorem to the other.)

To brandish mathematics as unfathomable “abstract” sorcery, as was done in Scientia Salon, is a strange, but not new, streak.

There in “Abstract Explanations In Science” Massimo and another employed philosopher pondered “whether, and in what sense, mathematical explanations are different from causal / empirical ones.”

My answer is that mathematical, and, more generally logical, explanations are the model of all explanations. We speak (logos) and thus we communicate our thoughts. Even to ourselves.

The difference between mathematics and logic? Mathematics is more poetical. For example, Category Theory is not anchored in logic, nor anywhere else. It is hanging out there, beautiful and useful, a castle in the sky, just like all and any poem.

Such ought to be the set-up on the nature of what causality could be, to figure out what causality is in the physical world. Considering that Quantum Entanglement is all over nature, this is not going to be easy (and it may contain a hidden clock).

Patrice Ayme’

Axiom of Choice: Crazy Math

March 30, 2014

A way to improve thinking is to imagine more, and be more rigorous. What better place to exert these skills than mathematics and logic? Things are clearer there.

The crucial Axiom Of Choice (AC) in mathematics has crazy consequences. After describing what it is, and evoking some of its insufferable consequences, I will expose why it ought to be rejected, and why the lack of a similar rejection, at the time, in a somewhat similar situation, may have helped in the decay of Greco-Roman antiquity.

This is part of my general, Non-Aristotelian campaign against infinity in mathematics and beyond. The nature of mathematics, long pondered, is touched upon. A 25-century-old “proof” is mauled, and not just because it’s fun. There is deep philosophy behind it. Call it the philosophy of sustainability, or of finite energy.

Intolerably Crazy Math From Axiom of Choice

The Axiom of Choice makes you believe you can multiply not just wine, fish and bread, but space itself: AC corresponds, one can say, to a wasteful mentality.

The Axiom of Choice says that, given a collection C of pairwise disjoint, non-empty subsets of a set S, there exists a set containing exactly one element from each of those subsets. That sounds innocuous enough, and obvious. And obvious it is, if one thinks of finite collections. However, if C is infinite, it gets boringly complicated.
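For the record, the usual choice-function formulation (equivalent to the disjoint-collection version just described) reads:

```latex
\[
\forall C\,\Big(\, \varnothing \notin C \;\Longrightarrow\; \exists\, f : C \to \textstyle\bigcup C \;\; \forall A \in C,\; f(A) \in A \,\Big)
\]
```

That is: for any collection C of non-empty sets, there is a function f picking one element out of each of them.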

Moreover, AC has a consequence: given a unit sphere, one can cut it in disjoint pieces, and reassemble those pieces to build two unit spheres. Banach and Tarski, two Polish mathematicians (Banach worked in Lwów, in what’s now Western Ukraine, the object of Putin’s envy and greed), demonstrated this Banach-Tarski paradox. It’s viewed as an object of wonder in General Topology.

I prefer to view it as an object of horror. (The pieces are not Lebesgue measurable, which means they are not physical objects. Such non-measurable objects had been found earlier by Vitali and Hausdorff.)
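Stated compactly (a standard formulation, added here for reference): the unit ball B in three-dimensional space can be partitioned into finitely many pieces, moved by rotations and translations, and reassembled into two copies of B; five pieces are known to suffice.

```latex
\[
B \;=\; A_1 \sqcup \dots \sqcup A_k, \qquad
g_1 A_1 \sqcup \dots \sqcup g_j A_j \;=\; B, \qquad
g_{j+1} A_{j+1} \sqcup \dots \sqcup g_k A_k \;=\; B,
\]
\[
\text{with each } g_i \text{ an isometry of } \mathbb{R}^3 \ (k = 5 \text{ suffices}).
\]
```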

Punch line? The Axiom Of Choice (AC) is central to all of modern mathematics. Position of conventional mathematicians? The fact that AC is so useful, all over mathematics, proves that AC can be fruitfully considered to be true.

My retort? Maybe what you view as fruitful mathematics is just resting on a false axiom, or at least one against nature, and thus is just plain false, or against nature. One may be better off studying mathematics that is not against nature.

As I showed earlier, calculus survives the outlawing of infinity in mathematics. That pretty much means that useful mathematics survives.

You see, a problem with mathematics, even the simplest arithmetic, is that, once one has admitted the infinity postulate, one can always find undecidable propositions, thanks to the Cantor Diagonal process (this is part of the Incompleteness Theorems of mathematical logic: Gödel, etc.).

That means a field such as Euclidean geometry is infinite, in the sense that it has an infinite number of unprovable statements. Each can be decided either way: false, or true. Each thus gives rise to two mathematics.

Yet, even modern mathematicians will admit that studying Euclidean geometry for an infinite amount of time is of little interest. Proof? They don’t do it.

Yet, what’s the difference with what they are doing?

Mathematics is neurology, and neurology can be anything but infinite. Think about what it means. Yes, mathematics is even cephalopod neurology, with the octopus’ nine brains. Fractals, for example, are part of math, but far from the tradition of equating angles or algebraic expressions.

It’s a big universe out there. The number one consequence to draw from the history of science is that scientists form tribes. Quite often those tribes go astray… for more than 1,000 years (see notes). Worse: by making science, and/or mathematics, uninteresting, they may lead to a weakening of public intelligence.

I would suggest that this effect, making science and mathematics priestly and narrow-minded, contributed to the powerful anti-intellectual tsunami that struck the Roman empire.

Greek mathematicians had excluded all mathematics as unworthy of consideration, but for a strict subset of “Euclid’s Elements” (some of the present Euclid Elements were added later). The development of the excluded mathematics was carried out by others (Indians, and to some extent, Iranians and Arabs).

It turned out that this more practical mathematics, excluded by Euclid because it was viewed as non-rigorous and primitive, led to deeper and more powerful insights.

The irony was that Euclid’s Elements, in the guise of rigor, used an axiom that was not needed in general: the parallel axiom. That axiom, by supposing too much, killed the imagination.

I suggest nothing less happening nowadays, with the Axiom of Choice: it’s one axiom too far.

Patrice Aymé

Technical notes:

Until recently, if one was not a Supersymmetric (SUSY) physicist, it was impossible to find a job, except as a taxi cab driver. There was a practical axiom ruling physics: the world had to be supersymmetric.

Now the whole SUSY business seems to be imploding, as CERN’s LHC came up empty, and it dawned on participants that there was no prospect of an experimental confrontation in the imaginable future… I have studied SUSY, and I have a competing theory, for which two hints of experimental proof are imaginable (namely Dark Energy and Dark Matter).

I said the AC was one axiom too far, but actually I think infinity itself is an axiom too far. I exposed earlier what’s wrong with the 25 centuries old proof of infinity (it assumes one can use a symbol one cannot actually evoke, because there is no energy to do so!).

Geocentric astronomy ruled from Aristarchus of Samos (who proposed the heliocentric system in the third century BCE) until Buridan (who used inertia, which he had discovered, to make the heliocentric system more reasonable, ~1320 CE; Copernic learned Buridan in Cracow, Poland). It could be viewed as an axiom.

Hidden axioms are found even in arithmetic. For example, the Archimedean Axiom was used implicitly by all mathematicians before Model Theory logicians detected it around 1950 (it says that, given two positive integers A and B, a third one, D, can be found such that DA > B; in models where this is not fulfilled, one gets non-standard integers).
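In symbols, the Archimedean property for the positive integers, as used implicitly here:

```latex
\[
\forall A > 0 \;\; \forall B \;\; \exists D \in \mathbb{N} : \; D \cdot A > B .
\]
```

A non-standard integer is then one that exceeds D·1 for every standard D.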