Posts Tagged ‘Logic’

What Is A Logic? Just A Piece Of Mind

January 15, 2017

I would propose that a logic is anything which can be modelled with a piece and parcel of brain.

I will show, surprisingly enough, that this is a further step in Cartesian Logic.

At first sight, it may look as if I were answering a riddle with further mysteries. Indeed, but with mysteries which can be subjected to experimental inquiry (now or tomorrow).

What is a brain? A type of Quantum Computer! And what is Computing, and the Quantum? Well, works in progress. There is something called Quantum Logic, but it does not necessarily define the world, as exactly what Quantum Physics is remains obscure.

In practice? Logic is what works, a set of rules to go from a set A of statements to a set B of statements.

In this perspective, Medieval logic did not decline. Instead it transmuted into mathematics.

The teaching of Logic or Dialectics, from a collection of scientific, philosophical and poetic writings, French, 13th century; Bibliotheque Sainte-Genevieve, Paris, France. The 13th century was a time of extreme intellectual activity in Europe, superior to anything else in the world, centered on Paris and the 800 miles around it. In particular, the heliocentric system was proposed by Buridan, after he overthrew Aristotelian Physics by inventing and discovering inertia.

An article in Aeon, “The Rise And Fall And Rise Of Logic”,

https://aeon.co/essays/the-rise-and-fall-and-rise-of-logic

reflects on the importance of the history of the notion of logic:

“Reflecting on the history of logic forces us to reflect on what it means to be a reasonable cognitive agent, to think properly. Is it to engage in discussions with others? Is it to think for ourselves? Is it to perform calculations?

In the Critique of Pure Reason (1781), Immanuel Kant stated that no progress in logic had been made since Aristotle. He therefore concludes that the logic of his time had reached the point of completion. There was no more work to be done. Two hundred years later, after the astonishing developments in the 19th and 20th centuries, with the mathematisation of logic at the hands of thinkers such as George Boole, Gottlob Frege, Bertrand Russell, Alfred Tarski and Kurt Gödel, it’s clear that Kant was dead wrong. But he was also wrong in thinking that there had been no progress since Aristotle up to his time. According to A History of Formal Logic (1961) by the distinguished J M Bocheński, the golden periods for logic were the ancient Greek period, the medieval scholastic period, and the mathematical period of the 19th and 20th centuries. (Throughout this piece, the focus is on the logical traditions that emerged against the background of ancient Greek logic. So Indian and Chinese logic are not included, but medieval Arabic logic is.)”

The old racist Prussian Kant, a fascist, enslaving cog in the imperial machine turned false philosopher, was unsurprisingly incorrect.

The author of the referenced article, Catarina Dutilh Novaes, is professor of philosophy and the Rosalind Franklin fellow in the Department of Theoretical Philosophy at the University of Groningen in the Netherlands. Her work focuses on the philosophy of logic and mathematics, and she is broadly interested in philosophy of mind and science. Her latest book is The Cambridge Companion to Medieval Logic (2016).

She attributes the decline of logic, in the post-medieval period known as the Renaissance and the Enlightenment, to the rise of printed books, self-study and the independent thinker. She rolls out Descartes, and his break from formal logic:

Catarina writes: “Another reason logic gradually lost its prominence in the modern period was the abandonment of predominantly dialectical modes of intellectual enquiry. A passage by René Descartes – yes, the fellow who built a whole philosophical system while sitting on his own by the fireplace in a dressing gown – represents this shift in a particularly poignant way.”

Speaking of how the education of a young pupil should proceed, in Principles of Philosophy (1644) René Descartes writes:

After that, he should study logic. I do not mean the logic of the Schools, for this is strictly speaking nothing but a dialectic which teaches ways of expounding to others what one already knows or even of holding forth without judgment about things one does not know. Such logic corrupts good sense rather than increasing it. I mean instead the kind of logic which teaches us to direct our reason with a view to discovering the truths of which we are ignorant.

Catarina adds: “Descartes hits the nail on the head when he claims that the logic of the Schools (scholastic logic) is not really a logic of discovery. Its chief purpose is justification and exposition.”

Instead, Descartes claims, and I claim, that a new sort of logic arose: Medieval Logic transmuted itself into mathematics (Descartes does not say this, but he means it). And mathematics is not really logical in the strictest sense, as it has too many rules to be strictly logical.

Buridan, a great logician who studied the Liar Paradox closely (the paradox which later gave the Incompleteness Theorems), had students such as the bishop Oresme, who demonstrated what turned out to be the first practical theorems of calculus (more than two centuries before the formal invention of calculus by Fermat, and Fermat’s discovery of the Fundamental Theorem of Calculus, that integration and differentiation are inverse to each other).

For example, under the influence of Buridan and then Oresme, graphs and later equations themselves were invented. So logic became mathematics. That was blatant by the time Descartes invented Algebraic Geometry. Algebraic Geometry gave ways to deduce, to go from a set A to a set B, using a completely new method never seen before.

In turn, by the Nineteenth Century, mathematical methods contributed to old questions in Logic (the most striking being the use of Cantor Diagonalization, together with the self-referential Liar Paradox, to show incompleteness).

In this spirit, not only Set Theory, naive or not, but Category Theory can be viewed as types of logic. So is, of course, computer science. Logic is whatever enables us to deduce. Thus even poetry is a form of logic.

Logic is everywhere there is mental activity, and it is never complete.

If logic is just pieces of brain, then what? Well, some progress in pure logic can be made, just by paying attention to how the brain works. The brain works sequentially, temporally, with local linear logics (axonal and dendritic systems). The brain tends to avoid contradictions (but not always, and nothing infuriates people more than to be exposed to their own contradictions and gaps in… logic). Also, all these pieces of brain, these logics, are not just temporally ordered, but finite.

As we try to use logic to look forward, as a bunch of monkeys messing up our space rock, it is important to realize that what logic is, has not been properly defined, let alone circumscribed. Indeed, if, surprise, surprise, logic has not been properly defined, let alone circumscribed, much more is logically possible than people suspect!

Patrice Ayme’

 


Mentality Trumps Logic

November 30, 2016

Mental States Trump (Local Linear) Logic

TRUMP MADNESS MENTALLY ENLIGHTENING, thank you, all of you, clueless fanatics, for providing us with not just entertainment, but insights on how insects think.

How do people think? When thinking about thinking, intellectuals tend to go back to Plato describing the mythical Socrates ponderously going from a) to c) because a) implied b) and b) implied c). Well, this is NOT how the brain works. The brain has basically two systems: Local Linear Logic, and Topological Logic (TL = emotion, so we will call it ES, the Emotional System). LLL and ES are entangled. For example, ES, the Emotion System, shuts off, and opens, various sub-systems in the brain. Moreover the ES directs consciousness into these subsystems. Each of these systems comes with its own logic. So there is no such thing as “logic” per se.

Actually modern axiomatics in logic considers that any Logic L comes with its own Universe U (in which it sits, so to speak). Varying U varies L. Thus a Logic L in the brain, sitting in subsystem S1 will be different from one sitting in subsystem S2, because they constitute different universes U. (An aspect of that was long known, as thinkers argued that various drugs, from alcohol to THC enabled them to reach various stages of consciousness…)
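
A minimal sketch of that model-theoretic point (my own toy illustration, with hypothetical universes U1 and U2, not a formal model theory library): the same sentence comes out true in one universe and false in the other, so what “the logic” validates depends on the universe it sits in.

# Toy illustration: one sentence, "every x has an additive inverse in the universe",
# evaluated over two different universes U.
def every_x_has_inverse(universe):
    """True if, for each x in the universe, -x is also in the universe."""
    return all(-x in universe for x in universe)
U1 = {-2, -1, 0, 1, 2}          # a symmetric universe
U2 = {0, 1, 2}                  # an asymmetric universe
print(every_x_has_inverse(U1))  # True : the sentence holds in U1
print(every_x_has_inverse(U2))  # False: the very same sentence fails in U2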

Thus what Plato talked about is basically irrelevant to fostering wisdom. What is relevant is mental subsystem selection: how, and why. And even subsystem management. Instead, Plato explores logic, LLL. And recent events have been enlightening: LLL is mostly secondary for directing people’s behavior.

I Think, Therefore I Sting. At Least, Sometimes, I Feel That Way.

By “Trump Madness” I do not mean Trump is mad, far from it: after all, he is the next president, and already causing more change than Obama did in 8 years (see Europe dumping “austerity” within 30 hours of Trump’s election). Clearly, there was a very smart method to Trump’s madness, and it was highly successful for him, as he obtained the loftiest job in the world (at least as far as conventional wisdom has it; in truth the loftiest job is mine, but never mind…). Thus “Trump madness” was anything except madness, on the part of Trump… Or his supporters (who also got what they wanted).

The real madness has been the flow of insults and indiscriminate violence on the part of “Clinton” supporters. Innocent thinkers were called “unscholarly, uncouth, anti-semitic, racist, xenophobic, judged to have Obsessive Compulsive Disorder, and compulsive liars”. This was just a sampler of the most polite insults directed at me… by “friends”… and I am NOT a Trump supporter. Just thought to be so, because I rolled out all sorts of graphs depicting demoncracy (from inequality, to incarceration rate, to life expectancy, to government investment, etc.).

Never mind that these were all positions I had held, sometimes for decades; they are all extremely progressive, and my only crime is that Trump embraced them.

Insults directed at Trump were often obviously more insane than grievous. Trump was called “xenophobic” (the evidence is the exact opposite, that is, Trump is an extreme xenoPHILE). Trump was called “anti-semitic” (his beloved and trusted son-in-law is an observant Jew). Trump was called a business failure (he grew his “organization”, now in 60 countries, from 17 million dollars to somewhere around ten billion…)

How come Clinton supporters became so abusive? OK, they were surprised. Not just because people were scared to reveal in the polls that they would vote for Trump, skewing polls (pollster Nate Silver discovered this a week or two before the vote, so he “unskewed” the polls, and revealed the chances of Trump were significant; I knew for months, just talking to people, that people were hiding their Trump preferences).

Clinton supporters did not turn abusive and insulting just because what they worry about turned out not to be what most of the country worries about. But, mostly, they hated, because it turned out that they had become strangers to themselves, and the world. Part of them rose in fury, and took over their persona, because they wanted to lash out, so great was the pain that incomprehension caused.

The Clinton supporters had no idea how neurohormonally entangled they were with (their idea of) their candidate. Precisely because they had deliberately ignored the (left, leftist, liberal, progressive) case I have made for more than eight years (with all those graphs), they had turned into fanatics, Jihadists, because they had rejected (the unsavory) reality.

The mental order in the brains of these self-described progressives, supposed to address politics, had become hopelessly disconnected from reality. For example, in judging Obama, they judged his brown skin, but not the fact that Obama was led by the nose by Lawrence Summers, the Harvard-Goldman Sachs surrogate who had dismantled, under Bill Clinton, the Banking Act of 1933 (“G-S”). And this, seven months before Obama reigned. And they ignored hundreds of other indicators which were flashing far more towards the right, and towards corporate fascism, than under any president before.

Thus the mental subsystems Clinton supporters activated over the years made them not just unreal, but incapable of activating anything else. One of my preferred games these days is to question Clinton-Obama fanatics about Quantitative Easing. I generally draw a blank. The (self-perceived) most clever ones tell me it was a good thing. So here you have so-called progressives saying that giving more than ten trillion dollars to the world’s richest, most corrupt people and institutions was… a good thing.

Guess what, you dummies? It was a good thing only for plutocracy, also known as demoncracy. The only person who could understand what I was talking about, and agreed with me before meeting me, is a Senior VP in a major bank.

People think first with their neurohormones. Tell me which of their neurohormones are most active, and I can tell you where their Local Linear Logic delves. Obsession leads and localizes reflection.

Is there experimental evidence for the preceding? Yes, there is, from… insects. The theory of consciousness is starting to rise. It involves making flies play videogames, or seeing if, like American students, they can get scared. Flies can be put in a state of fear, wanting to get to a “safe space”.

Insects have a rudimentary ego, though a very different one from what Narcissus, or classical literature, would have it. Insect ego appears as the ability to act and mentally concentrate on certain environmental cues, thus ignoring others. “They don’t pay attention to all sensory input equally,” cognitive scientist Andrew Barron of Australia’s Macquarie University declared.

When you and I are hungry, we don’t just move towards food, as bacteria do. Our hunger creates a particular feeling (an emotion) which, in turn, rearranges which subsystems are activated in our brain. Such a state is called a “subjective experience” in traditional philosophy. Do insects have the same? Obviously they do (I can say this from anecdotes, and thus as a philosopher; scientists will verify and make sure).

Insects can be led into mental states which do not fit reality. So can humans (humans even do this deliberately, when they play or make jokes). Once in such a state, a particular logic, the universe of which is that precise mental state, flows. That Local Linear Logic is particular, yet it leaves (neural) connections behind. If suddenly precipitated, for real, in a situation calling for that mental state, the LLL is ready to kick in. That’s why humans play, and make jokes.

This election was a joke. So were the mental states most citizens put themselves in, or let themselves be put in, over the last few decades. Time to wake up.

And time to wake up to the reality that it is moods which create logic, even more than it is logic which creates moods.

Patrice Ayme’

Happy In the Sky With New Logics: Einstein’s Error II

August 6, 2016

Einstein assumed reality was localized and definite in one of his famous 1905 papers, and physics never recovered from that ridiculous, out-of-the-blue, wanton, gratuitous error. (The present essay complements the preceding one found in the link). 

At the origin of Quantum Mechanics is Max Planck’s train of thought. Max demonstrated that supposing that electromagnetic energy was EMITTED as packets of energy hf explained the two obvious problems of physics; h is a constant (since then named after Planck), f is the frequency of the light.

Then came, five years later, Einstein. He explained the photoelectric effect’s mysterious features by reciprocating Planck’s picture: light’s energy was RECEIVED as packets of energy hf. Fine.   
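
A quick numerical aside of my own (not in Planck’s or Einstein’s papers): one packet hf of visible light is minuscule, which is part of why the graininess of emission and reception was so hard to notice.

# Energy of one quantum hf for green light (~540 THz), in joules and electron-volts.
h = 6.626e-34          # Planck's constant, in J*s
f = 5.4e14             # frequency of green light, in Hz
E = h * f              # one packet of energy
print(E)               # ~3.6e-19 J
print(E / 1.602e-19)   # ~2.2 eV per quantum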

However, in so doing, Einstein claimed that light, LIGHT IN TRANSIT, was made of “LICHT QUANTEN” (quanta of light), which he described as localized. He had absolutely no proof of that. Centuries of observation stood against it. And the photoelectric effect did not necessitate this grainy feature in flight, so it did not justify it.

Thus Einstein introduced the assumption that the ultimate description of nature was that of grains of mass-energy. That was, in a way, nothing new, but the old hypothesis of the Ancient Greeks, the atomic theory. So one could call this the Greco-Einstein hypothesis. The following experiment, conducted in 1921, demonstrated Einstein was wrong. Thus the perpetrator, Walther Gerlach, did not get the Nobel, and the Nobel Committee never mentioned the importance of the experiment. Arguably, Gerlach’s experiment was more important than any work of Einstein, and thus deserved punishment. The Jewish Stern, an assistant of Einstein, got the Nobel alone in 1944, when Sweden was anxious to make friends with the winning “United Nations”:

Two Points. The Classical Prediction Is A Vertical Smear. It Is Also Einstein’s Prediction. And That Smear Is Incomprehensible In Einstein’s View Of The World.

Yet, Einstein’s advocacy of nature as made of grains was obviously wrong: since the seventeenth century, it was known that there were wave effects ruling light (diffraction, refraction, Newton’s rings). That was so true that Huyghens proposed light was made of waves. Around 1800 CE Young and Fresnel proposed proofs of the wave nature of light (the 2 slit experiment and Poisson’s dot). The final proof of the wave theory was Maxwell’s completion and synthesis of electromagnetism, which showed light was an electromagnetic wave (travelling always at the same speed, c).

Einstein’s hypothesis of light as made of grain is fundamentally incompatible with the wave theory. The wave theory was invented precisely to explain DELOCALIZATION. A grain’s definition is the exact opposite.

There is worse.

Spin was discovered as an experimental fact in the 1920s. Interestingly, it had been discovered mathematically by the French Alpine mathematician Elie Cartan before World War One, and was later stumbled upon again through Dirac’s invention of the eponymous equation.

The simplest case is the spin of an electron. What is it? When an electron is put in a magnetic field M, it deviates either along the direction of M (call it M!) or the opposite direction (-M). This sounds innocuous enough, until one realizes that it is the OBSERVER who selects the direction “M” of M. Also there are two angles of deviation only. (The Gerlach experiment was realized with silver (Ag) atoms, but the deviation was caused by a single electron therein.)

Einstein would have us believe that the electron is a grain. Call it G. Then G would itself have its own spin. A rotating charged particle G generates a magnetic field. Call it m. If Einstein were correct, as the direction of M varies, the interaction between M and the grain G’s magnetic field m would vary. But that is not the case: it is as if m did not count. At all. Does not count, at all, whatsoever. It’s all about M, the direction of M.
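
For concreteness, here is a minimal sketch of the standard quantum rule for such a measurement (the textbook account, in my own words and code, not Einstein’s grain picture): whatever axis the spin was prepared along, a measurement along M yields only two outcomes, with probabilities cos^2(theta/2) and sin^2(theta/2), where theta is the angle between that prior axis and M. The grain’s own field m appears nowhere.

import math
def spin_outcome_probabilities(theta):
    """Spin-1/2 measured along direction M, after preparation along an axis at
    angle theta from M: only two outcomes, 'up' (along M) or 'down' (against M)."""
    return math.cos(theta / 2.0) ** 2, math.sin(theta / 2.0) ** 2
for theta_deg in (0, 45, 90, 180):
    up, down = spin_outcome_probabilities(math.radians(theta_deg))
    print(theta_deg, round(up, 3), round(down, 3))
# 0 -> (1.0, 0.0); 90 -> (0.5, 0.5); 180 -> (0.0, 1.0): two spots, never a classical smear.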

So Einstein was wrong: there is no grain G with an independent existence, an independent magnetic field m.

Bohr was right: Einstein was, obviously, wrong. That does not mean that Bohr and his followers, who proclaimed the “Copenhagen Interpretation” were right on other issues. Just like Einstein hypothesized something he did not need, so did the Copenhagists.

Backtrack above: M is determined by the observer, I said (so bleated the Copenhagen herd). However, although M can be changed by an observer, clearly an observer is NOT necessary to create a magnetic field M and its direction.

Overlooking that blatant fact, that not all magnetic fields are created by observers, is the source of Copenhagen confusion.

We saw above that correct philosophical analysis is crucial to physics. Computations are also crucial, but less so: a correct computation giving correct results can be made from false hypotheses (the paradigm here is epicycle theory: false axiomatics, the Sun did not turn around the Earth, yet, roughly correct computations produced what was observed).

Out of Quantum Theory came Quantum ElectroDynamics (QED), and, from there, Quantum Field Theory (QFT).  

QED is one of the most precise scientific theories ever. However, there is something much more precise: the mass of the photon is determined to be no more than 10^(-60) kilogram (by looking at whether the electromagnetic field of Jupiter decreases as 1/d^2…).

Nevertheless, QED is also clearly the most erroneous physical theory ever (by an order of 10^60). Indeed, it predicts, or rather uses, the obviously false hypothesis that there is some finite energy at each point of space. Ironically enough, it is Einstein and Stern (see above) who introduced the notion of “zero point energy” (so, when Einstein later could not understand, or refused to understand, Quantum Electrodynamics, it was not because all the weirdest concepts therein were not of his own making…)

The debate on the Foundations of Quantum Physics is strong among experts, all over the map, and permeated with philosophy. Thus don’t listen to those who scoff at the idea that philosophy is the master of science: it always has been, it is frantically so, and always will be. It is a question of method: the philosophical method uses anything to construct a logic. The scientific method can be used only when one knows roughly what one is talking about. Otherwise, as in Zeroth Century, or Twentieth Century physics, one can go on imaginary wild goose chases.

From my point of view, Dark Matter itself is a consequence of the True Quantum Physics. This means that experiments could be devised to test it. The belief that some scientific theory is likely incites beholders to make experiments to test it. Absent the belief, there would be no will, hence no financing. Testing for gravitational waves was long viewed as a wild goose chase. However, the Federal government of the USA invested more than one billion dollars in the experimental field of gravitational wave detection, half a century after an early pioneer (who was made fun of). It worked, in the end, splendidly: several Black Hole (-like) events were detected, and their nature was unexpected, bringing new fundamental questions.

Some will say that all this thinking, at the edges of physics and philosophy, is irrelevant to their lives, now. Maybe they cannot understand the following. Society can either put its resources into making the rich richer, more powerful and domineering. Or society can pursue higher aims, such as understanding more complex issues. If nothing else, the higher technology involved will bring new technology which nothing else will bring (the World Wide Web was developed by CERN physicists).

Moreover, such results change the nature not just of what we believe reality to be, but also of the logic we have developed to analyze it. Even if interest in all the rest faded away, the newly found diamonds of more sophisticated, revolutionary logics would not fade away.

Patrice Ayme’

 

HUMANITY: A SINGULARITY

March 30, 2016

HUMANITY: Not Just Singular.

The Human Condition: never so contingent that we are forever prisoners of it.

Some worry about, others anticipate “the singularity”, when human technology exponentiates for all to see. Many expect that it is coming soon. But “the singularity” has long been around. We have been living through it ever since humans evolved, and started to think in all-devouring ways, a couple of million years ago. The invention of mass transmissible global culture, hundreds of thousands of years ago, launched us towards the stars.

What’s science for? Aside from keeping our world up in the air? Science is what is known for sure. A (good) mom knows to love for sure. So, alright, there is a science of love. Human beings, like many animals, all the way down to insects, are naturally equipped not just to know, but to understand what they need to understand to survive as a species.

Art: Beautiful & Interesting, In Part Because It Reveals Logics Never Before Suspected, Or Because It Reminds Us Of Them. When God Shows His Consideration For Us, We Are Made Into The Crown Of Creation, & The Transmission Of Power & Knowledge Are Impressed Upon Us As The Most Important Activities, Worthy Of The Gods We Are

Orphaned earwigs, a type of insect, are at a disadvantage and exhibit reduced maternal skills. Beauty and love are everywhere. Transmission of knowledge is what brains do.

Humanity itself is a singularity of mind which the evolution of creation (“life”) has blossomed into. Our technologies (“specialized discourses”) expanded over all of Earth, and expanded what Earth could be for us, as Earth itself became our province. And this physical empire did so because our minds expanded. Earth became an empire of reason (in particular, very hot, crazy reasons). Our minds expanded so much that they not just revealed, but caused, accordingly, new, spectacular problems… which we presently enjoy ever more.

Science is not just a knowledge of facts, but a knowledge of beauty which would not have been otherwise revealed. Science is also the uncovering of logic of previously unsuspected subtlety, for all to see. Thus subjects as esoteric as how exactly supernovae explode can reveal how explanations can go about things.

Beauty itself is partly a matter of logic: when god touches humanity with his finger in Michelangelo’s famous painting, the painting depicts a logic, and that is part of what makes it beautiful.

On March 21, 2016, NASA and its (crippled, but reconfigured for doing other things) Kepler telescope, revealed the visualization of the explosions of some supernovae.

Knowing about supernovae is not just knowing how the chemistry which made Earth possible was created. It is not just about knowing the size of the universe, and how fast it is changing. It is also knowing about analogies, metaphors, logics and possibilities we never suspected, and also about our naivety, in never having suspected they existed.

Our knowledge of facts and logics, and of the beauty and possibilities they entail, is indeed exponentiating (or more). It long has been; it’s our definition.

However, it is now clear that, within a decade or so, most work will be taken over by machines, even work of a creative nature. The human work will have to be the all too human crazy creativity of geniuses, art never thought of before. It is already the case: the economic success of a city such as San Francisco is now mostly from the creation of so-called “apps”, which are little computer programs for doing little things one never thought of, or could not do, before (or assemblies thereof, such as the notorious “Uber” with its astounding 65 billion dollar market cap).

And we can’t turn back, because it’s what we are: “Plus Oultre!”, as Charles Quint put it in his native French: Plus Ultra (the motto of Spain), Plus Meta.

And the biosphere could not handle a U-turn on humanity’s part, weirdly enough. Actually, handling the CO2 crisis will have to be done the old-fashioned way: through brand new technological means, the exact way in which life already deals with it (after all, plants use CO2 to build themselves… although CO2 is notoriously chemically inert, thus nearly useless industrially… so far. But, philosophically speaking, there is no reason why human science cannot duplicate with CO2 what 4 billion years of molecular biology succeeded in achieving.)

Some may object that the preceding, however alluring, is a sort of sing-song, Patrice perched on a branch, singing to the stars, human, all too human (thus not really marketable).

However, not so. The poorly parented earwigs, these modest insects with big pincers for a tail, are themselves poor parents, because they have undergone genetic changes. They are victims, or, more exactly, creations of epigenetics, which controls genetics itself (and which Lamarck had anticipated). We, the genus Homo, have gone through 100,000 generations of intense selection according to our capability to outsmart our opponents (other, less performing humans, mostly). So we are the mental species, in every sense of the term: crazily creative, and creatively crazy.

This is not just a song, it’s a reality. Our reality. Not just our ethics, but our esthetics are dependent upon the sum of all our evolutionary histories, our genetics and epigenetics, and those progress to ever more understanding, and mastery of the universe, therefrom.

Thus, all speed ahead, we are special, we may as well admit that violating old norms and creating new ones is not just what we do, but what we are.

Patrice Ayme’   

Logic Is Not Everything: It Can Be Anything

February 24, 2016

A common mistake among many of the simple ones is that, as long as we keep calm and use logic, we can solve any conflict. It was understandable that one could make such a mistake out of naivety and inexperience, before the Twentieth Century. However, we now have, in black and white and well known, demonstrations to the contrary, in formal systems studied by professional logicians. Besides, as the French Republic is demonstrating in Libya again, in collaboration with the USA, war has a logic which squashable critters don’t have.

Yes, I am also thinking of the famous Incompleteness Theorems, but, obviously, not only of those. There is way worse.

This Means All Important Choices Have To Do With Love, Esthetics, Will, Power, Craziness, The Proverbial Human Factors. Logic Can’t Go Where The Heart Rules. Or Then It Goes Into METAlogic.

Before I get into incompleteness and further evils, let me recap some of the traditional approach. I thank in passing Massimo P for calling my attention to this.

The value of logic, February 23, 2016, Massimo

This is going to be short and rather self-explanatory, with no additional commentary on my part necessary at all. Here is the full transcription of Epictetus’ Discourses, II.25, a gem to keep in mind for future use:

“When one of his audience said, ‘Convince me that logic is useful,’ he said, Would you have me demonstrate it?

‘Yes.’

Well, then, must I not use a demonstrative argument? And, when the other agreed, he said, How then shall you know if I impose upon you?

And when the man had no answer, he said, You see how you yourself admit that logic is necessary, if without it you are not even able to learn this much–whether it is necessary or not.”

Actually Epictetus uses “logic”, it seems to me, rather in its original sense, a discourse. Yet Massimo, like the moderns, will tend to use logic as it was meant in, say, 1900, just before Bertrand Russell objected to Frege’s forgetfulness of the “Liar Paradox” in his formal system justifying arithmetic. A further exploitation of the Liar Paradox brought the incompleteness theorems.

Logic is indeed how human beings communicate. Logic enables debate, and debate is the equivalent of sex, among ideas. It generates entirely new species. However post Gottlob Frege and Bertrand Russell, logic has progressed much.

Modern studies in logic show formal logic can be pretty much anything. Formal systems contradicting the most cherished axioms have been found to be consistent. Some have cute names, such as “paradoxical logic”, “fuzzy logic”, “linear logic”. Thus, Logical Pluralism has been discovered. “Classical Logic” (which is neither complete nor coherent) is just a particular case. In some logics, a proposition can be neither true, nor false.
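
As a small illustration of one member of that plurality (my own sketch of strong Kleene three-valued logic, one well-known non-classical option): a proposition can take a third value, “unknown”, and the connectives still compose coherently.

# Strong Kleene three-valued logic: values True, False, and None ("unknown").
def k_not(a):
    return None if a is None else (not a)
def k_and(a, b):
    if a is False or b is False:
        return False
    if a is None or b is None:
        return None
    return True
def k_or(a, b):
    if a is True or b is True:
        return True
    if a is None or b is None:
        return None
    return False
U = None                 # a proposition that is neither true nor false
print(k_or(U, True))     # True : one true disjunct settles it
print(k_and(U, False))   # False: one false conjunct settles it
print(k_not(U))          # None : still undetermined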

Along the lines of one of the greatest logicians and mathematicians of the 20th Century, here is the Generalized Tarski Thesis (GTT):
An argument is valid if and only if in every case in which the premises are true, so is the conclusion.
So yes, madness can be logical. “Logic”, per se, is not much of a constraint. It’s only a set of coherent rules to draw a conclusion. The only constraint is to keep on talking.
The Ancient Greeks would have been very surprised.

So if logic is not the end-all, be all, what is?

Knowledge. Knowledge of the details. In other words, knowledge of evil. That is why, when I demonstrate, using knowledge, that Marcus Aurelius, supposedly the big time Stoic philosopher, makes an apology for Intellectual Fascism, I hear the cries of the Beotians, whose pathetic logic I have crushed underfoot. What happened? I went outside of their logic. So they insult me. (Should I spurn them, and make them feel that I have nothing to say, or insult them back, by showing them, and others, what idiots they are? Sitting on one’s hands in front of rabid fascism is neither wise, nor safe!)

In other news, French special forces are operating on the ground in Libya, helping, among other things, very precise US strikes. This is a case of using the same logic as the enemy. The Islamists terrorize and kill: a logic which is pretty drastic. It can also be adopted. Let’s see how it goes, when the country with the greatest, longest military tradition adopts it too. (France has a long history of drastic war against invading Islamists, since the Battle of Toulouse, in 721 CE.)

Speaking of the enemy, the French Internal Revenue Service is forcing Google Inc., the famous monopoly, to pay back taxes. Google has sent the clown it uses as CEO to Paris. Google was transferring profits it made in France through Ireland, and then Bermuda. The bill? 1.6 BILLION Euros. Great Britain has a similar situation and economy, but is asking for only a tenth of that. Such is plutocracy: greater in the UK than in France.

Ah, the French Defense Ministry is not denying media reports that the French army is in combat, on the ground, in Libya. Instead, the French government has announced the start of an inquiry to find out who compromised National Defense. In other words, it wants it to be known. Or there is much more coming, which it wants to hide. Or then American paranoia is contagious.

In other news, Belgium, historically a part of Gaul, has closed its border with France, as if Belgium were Great Britain, and France full of Huns (instead of only Afghans). Amusingly, and as a testimony of how much old gripes have subsided, the Franco-German border stays open.

Logic is not all. Facts are much more important, including facts on the ground.

Patrice Ayme’

Causality Explained

March 29, 2015

WHAT CAUSES CAUSE?

What Is Causality? What is an Explanation?

Pondering the nature of the concept of explanation is the first step in thinking. So you may say that there is nothing more important, nothing more human.

I have a solution. It is simplicity itself. I go for the obvious model:

Mathematics, logic, physics, and the rest of science give a strict definition of what causality, and an explanation, are.

How?

Through systems of axioms and theorems.

Some of the sub-systems therein have to do with logic (“Predicate Calculus”). They are found all over science and common sense (although they will not necessarily be present in systems of thought such as, say, poetry, or rhetoric).

WHEN A IMPLIES B, IN A LOGOS, ONE OUGHT TO SAY THAT A “CAUSES” B.

A and B are propositions. They do not have to be very precise.

Precision Is Not Necessarily The Smartest. Semantic Web Necessary.

As it turns out, except in Classical Computer Science as it exists today (Classical CS as opposed to Quantum CS, a subject developing over the last 20 years), propositions are never precise (so a degree of poetry is everywhere, even in mathematics!). Propositions, in practice, depend upon a semantic web.

A could be “Plate Tectonics” and B could be “Continental Drift”. That A causes B is one of the axioms of present-day geophysics.

Thus I define causality as logical implication.

To use David Hume’s example: flame F brings heat H, always, and so is supposed to cause it: F implies H. Hume deduced causality from observation of the link (if…then).

More detailed modern physics shows that the heat of flame F is agitation that can be transmitted (both a theorem about, and a definition of, heat). Now we have a full, detailed logos about F and what H means, and how F implies H, down to electronic orbitals.
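
To make that reading of causality as implication concrete, here is a toy sketch of my own (the rule base is illustrative, not a model of geophysics or thermodynamics): “A causes B” is encoded as “A implies B”, and conclusions are drawn by chaining the implications.

# Toy forward chaining: "causes" encoded as implications over propositions.
rules = [
    ("Plate Tectonics", "Continental Drift"),
    ("Flame", "Molecular Agitation"),
    ("Molecular Agitation", "Heat"),
]
def consequences(facts, rules):
    """Apply A -> B repeatedly until nothing new follows (the causal closure)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for a, b in rules:
            if a in facts and b not in facts:
                facts.add(b)
                changed = True
    return facts
print(consequences({"Flame"}, rules))
# {'Flame', 'Molecular Agitation', 'Heat'}: F implies H, through a detailed logos.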

Mathematicians are used to making elaborate demonstrations, and then, to their horror, discovering somewhere something that cannot be causally justified. Then they have to reconsider from scratch.

Mathematics is all about causality.

“Causes” in mathematics are also called axioms. In practice, well known theorems are used as axioms to implement further mathematical causality. A mathematician using a theorem from a distant field may not be aware of all the subtleties that allow one to prove it: he would use distant theorems he does not know the proof of, as axioms. One mathematician’s, or logician’s, axiom is another’s theorem.

(Hence some hostility between mathematicians and logicians, as much of what the former use the latter proved, but the former have no idea how!)

Causality, by the way, reflects the axonal geometry of the brain.

The full logic of the brain is much more complicated than mathematics, let alone Classical Computer Science, have it. Indeed, brain logic involves much more than axons, such as dendrites, neurotransmitters, glial cells, etc. And of these, only axonal geometry is simple enough to be approximated by classical logic… In first order.

Mathematics is causation. And the ultimate explanation. Mathematics makes causation as limpid as we can have it.

This theory met with the approval of Philip Thrift (March 27, 2015): “I agree exactly with the words Patrice Ayme wrote — but with “mathematics”→”programming”, “mathematical”→”programmatical”, etc.”

I pointed out later to Philip that Classical Programming was insufficient to embrace full human (and quantum!) logic. He agreed.

However the preceding somehow made Massimo P, a professional philosopher, uneasy. He quoted me:

“Patrice: “To claim that mathematics is not causal is beyond belief. Mathematics is all about causality.”

Massimo: It most obviously isn’t. What’s causal about Fermat’s Last Theorem? Causality implies physicality, and most of pure math has absolutely nothing whatsoever to do with physicality.

Patrice: “Causes” in mathematics are also called axioms.”

Massimo: “You either don’t understand what causality means or what axioms are. Or both.”

Well, once he had released his emotional steam, Massimo, a self-declared specialist of “physicality” [sic] did not offer one iota of logic in support of his wished-for demolition of my… logic. I must admit my simple thesis is not (yet) in textbooks…

Insults are fundamentally poetic, illogical, or pre-logical. Massimo is saying that being totally confused about causality and explanations is a sacred cow of a whole class of philosophers (to whom he had decided he belongs). Being confused about causality started way back.

“All philosophers,” said Bertrand Russell, “imagine that causation is one of the fundamental axioms of science, yet oddly enough, in advanced sciences, the word ’cause’ never occurs … The law of causality, I believe, is a relic of a bygone age, surviving, like the monarchy, only because it is erroneously supposed to do no harm …”

Russell was as wrong as wrong could be (not about the monarchy, but about “causation”). He wrote the preceding in 1913, when Relativity was well implanted, and he, like many others, was no doubt unnerved by it.

Poincare’ noticed, while officially founding “Relativity” in 1904, that the apparent succession of events was not absolute (but depended upon relative motions).

Indeed.

But, temporal succession is only an indication of possible causality. In truth causality exists if, and only if, a logical system establishes it (moreover, said logic has to be “true”; that, assigning a truth value, is by itself a separate question, which great logicians have studied without clear conclusions).

When an explanation can be fully mathematized, it is finished. Far from being “abstract”, it has become trivial, or so suppose those with minds for whom mathematics is obvious.

Mathematics is just like 2 + 2 = 4, written very large.

Fermat’s Last Theorem is not different in nature from 2 + 2 = 4… (But for something very subtle: semantic drift, and a forest of theorems used as axioms to go from one side of Fermat’s theorem to the other.)

To brandish mathematics as unfathomable “abstract” sorcery, as was done in Scientia Salon, is a strange, but not new, streak.

There in “Abstract Explanations In Science” Massimo and another employed philosopher pondered “whether, and in what sense, mathematical explanations are different from causal / empirical ones.”

My answer is that mathematical, and, more generally logical, explanations are the model of all explanations. We speak (logos) and thus we communicate our thoughts. Even to ourselves.

The difference between mathematics and logic? Mathematics is more poetical. For example, Category Theory is not anchored in logic, nor anywhere else. It is hanging out there, beautiful and useful, a castle in the sky, just like all and any poem.

Such ought to be the set-up on the nature of what causality could be, to figure out what causality is in the physical world. Considering that Quantum Entanglement is all over nature, this is not going to be easy (and it may contain a hidden clock).

Patrice Ayme’

Jesus, From Good To Bad

March 13, 2015

Talking too much about god is not viewed as serious philosophy in Europe anymore. However, just look at Charlie Hebdo, Putin, or the CIA accusing Julian Assange of having kissed a consenting woman wrong, to see the error of the ways of ignoring how imbeciles think.

Ignoring Hitler was not profitable to higher intellectual types; let’s not repeat the mistake.

IF YOU BELIEVE JESUS EXISTED, SO DID MOST PLUTO SERVING MYTHS:

“Evidence”, in law, history, and much of science, is all about establishing in what “universe” (in the sense given in Logical Treatises) the logos of the debate is going to live.

Informal Bayesian analysis is used all the way to do so. It is informal, because it depends blatantly upon subjective elements (so does all and any logos).

It can be fraught: some used it to “prove” the existence of Jesus, or its opposite.
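
A hedged numerical sketch of why it is fraught (the numbers below are purely illustrative, mine, not Carrier’s or anyone else’s): with the very same likelihood ratio for the evidence, two different subjective priors yield opposite-looking posteriors, so the subjectivity sits in the prior.

def posterior(prior, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior / (1.0 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)
lr = 0.5                             # illustrative: evidence twice as expected under non-historicity
print(round(posterior(0.9, lr), 2))  # confident prior 0.9 -> posterior ~0.82 (still a believer)
print(round(posterior(0.2, lr), 2))  # skeptical prior 0.2 -> posterior ~0.11 (still a skeptic)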

I wrote against the historicity of Jesus, for decades. In the USA, this makes you less appreciated than if you wrote against the car. But Jesus is central to tolerating the plutocratic order (strangely enough, as the Gospels clearly despise wealth).

Thinking Out Of The Box Works, Even For Gnus.

Carrier is a historian not beholden to Christianism. In the USA, an entire propaganda is directed against such people, calling them “Gnu Atheists”.

I just consulted Carrier’s (very recent) work:

http://www.bibleinterp.com/articles/2014/08/car388028.shtml

Carrier’s arguments about the inexistence of Jesus, the person, are purely logical, and similar to those I long published. However he misses more general arguments which I used. First observation: at the time, Jesus-like characters were a dime a dozen.

Some of the Jesus look-alikes, who really existed, violated the law, and were tried and executed (we have the historical records). Some died in Rome, some in the Orient.

Before I pursue the general theory, let me insist a bit using more arguments against the existence of Jesus the person.

It is often said that Tacitus speaks of Jesus (however, Josephus, the top Jewish general, writing his gigantic history of Judea 39 years earlier, did not).

Tacitus wrote the Annals in 109 CE. That was 45 years after Saint Paul spent some time inventing Cristus in his golden prison in Rome (I say). According to me, Saint Paul was exfiltrated from Rome (for the same reason that he was brought to Rome in the first place, to escape execution in Jerusalem).

Saint Paul obviously had very high contacts inside the Roman state (his exfiltration from Judea was already quite a risk for Rome). Four years after Saint Paul’s writing, the first Evangels/Gospels were written by supposed “eyewitnesses” of Cristus (although Josephus, who was in the best position to know everything, was not in the know).

Many top Romans obviously felt Cristus was a better deal than those pesky Jews. And he presented a golden opportunity for a universal religion (as every religion had a top god, it could be identified with the one of Jesus).

Indeed, by 300 CE, Christianism had extended massively a Romanitas of sorts, well beyond the Roman LIMES (the military border). (It is even rumored that at least one emperor was a closet Christian during the Third Century).

We know, from various documents, that very high officials in Rome, were engaged in the Christian conspiracy, early on. (Some declared they would write Gospels during their retirement…)

The idea of Christianism was not too bad, at first sight: it was to reintroduce the Republic, through the “Christian Republic”, a sort of sea monster that kept on reappearing until 1789…

As early as the Eighth Century, the Venetian Republic blossomed under the wings of the Franks (Charlemagne no doubt saw himself as the new Augustus… Or more exactly, DAVID).

 

Last, not least: the Annals were discovered by religious people, in religious establishments: in various Abbeys, Monasteries, and Monte Cassino. Rumors of forgeries are as old as their discovery. Are the “Cristus” passages authentic?

***

A good way to understand the root of a flawed reasoning is to understand the logic that exerts psychological pressure to produce that lie. There was a need for a Jesus character, so plenty of Jesus characters were produced, by the general logic in attendance.

What was that logic?

Jewish faith was Judeo-centric. It had a great strength: an undivided god. Many religions recognized a god of the gods, but having no god but god was simpler, and less subject to contradictions, while being more sympathetic to a state led by just one “Prince” (Princeps).

A message more oriented towards all people, not just Jews, and towards normal human ethology, that is, with more love than Rome experienced, fit the species better.

Hence a full century before the alleged Jesus, there was another, just like him in his philosophical message, but this one gentleman was fully historically documented, in Alexandria.

The logic wanted a Jesus, so Saint Paul produced it (with several caveats in his writings which basically recognized he made Jesus up; those caveats were pointed out by me, long ago, and by Carrier, more recently).

When Laplace furthered “Bayesian” analysis, he was interested in games of chance.

When philosophers produce truth, they do not blindly parrot gnu logic. Gnus are herd animals, travelling by the millions. Gnu Christians have stampeded all over civilization for 17 centuries.

How does new philosophy produce new truth? By pondering why gnus do what they do.

Why did Saint Paul want Jesus to be? Why was the “Jesus” message welcomed by the empire? The emperors and bishops who governed the empire in 400 CE had an interest in eliminating the logics those questions called for.

New truth is produced by introducing new facts, which break the universe the old logic rested on.

The best way to do that, is through a meta-logic making the old logic a special case (as General Relativity did to Classical Gravitation).

Arguably, Jesus was just the meta-logic towards a more human society, which the Roman Empire was sorely in need of.

Having a reason for Jesus the myth, makes the historical Jesus less likely. It explains the frantic anxiety of those fragile types who are afraid they cannot cuddle with their idol anymore.

What sort of reasoning is this? Having a different reason for a hypothesis can make axioms that led to this hypothesis superfluous. This is not properly speaking what came to be called “Bayesian” (a recent term) analysis. But it is related.

When Laplace presented his book on Celestial Mechanics to Napoleon, the tyrant remarked: “I do not see God in your book.” Laplace retorted: “I did not need this hypothesis.”

Emotional Thinking Is Superior Thinking

March 11, 2015

By claiming that emotional thinking is superior, I do not mean that “logical” thinking ought to be rejected. I am just saying what I am saying, and no more. No: just the opposite, “logical” thinking ought to be embraced. However, there are many “logical” types of thought possible.

Emotional and logical thinking can be physiologically distinguished in the brain (the latter is mostly about axons; the former about the rest).

Any “logical” thinking is, literally, a chain made of points. (And there are no points in nature, said a Quantum Angel who passed by; let’s ignore her, for now!)

Elliptic Geometry In Action: Greeks, 240 BCE, Understood The Difference Between Latitude & Geodesic (Great Circle)

Some say that hard logic, and mathematics is how to implement “correct thinking”. Those who say this, do not know modern logic, as practiced in logic departments of the most prestigious universities.

In truth, overall, logicians spent their careers proposing putative, potential foundations for logic. Ergo, there is no overall agreement, from the specialists of the field themselves, about what constitutes acceptable foundations for “logic”.

It is the same situation in mathematics.

Actually dozens of prestigious mathematicians (mostly French) launched themselves, in the 1950s, into a project to make mathematics rigorous. They called their effort “Bourbaki”.

Meanwhile some even more prestigious mathematicians, or at least the best of them all, Grothendieck, splendidly ignored their efforts, and, instead, founded mathematics on Category Theory.

Many mathematicians were aghast, because they had no idea whatsoever what Category Theory could be about. They derided it as “Abstract Nonsense”.

Instead it was rather “Abstract Sense”.

But let’s take a better known example: Euclid.

There are two types of fallacies in Euclid.

The simplest one is the logical fallacy of deducing, from emotion, what the axioms did not imply. Euclid felt that two circles which looked like they should intersect, did intersect. Emotionally seductive, but not a consequence of his axioms.

Euclid’s worst fallacy was to exclude most of geometry, namely what’s not in a plane. It’s all the more striking as “Non-Euclidean” geometry had been considered just prior. So Euclid closed minds, and that’s as incorrect as incorrect can be.

To come back to logic as studied by logicians: the logicS considered therein are much more general than those used in mathematics. Yet, as no conclusion was reached, this implies that mathematics itself is illogical. That, of course, is a conclusion mathematicians detest. And the proof of their pudding is found in physics, computer science, engineering.

So what to do, to determine correct arguments? Well, direct towards any argument an abrasive, offensive malevolence, trying to poke holes, just as a mountain lion’s canines try to pass between vertebrae to dislocate a spine.

That’s one approach. The other, more constructive, but less safe, is to hope for the best, and launch logical chains in the multiverses of unchained axiomatics.

Given the proper axioms, (most of) an argument can generally be saved. The best arguments often deserve better axiomatics (so it was with Leibnitz’s infinitesimals).

So, de facto, people have long been using not just “inverse probability”, but “inverse logic”. In “inverse logic”, axioms are derived from what one FEELS ought to be a correct argument.

Emotions driving axiomatics is more metalogical, than axiomatics driving emotions.

***

To the preceding, philosophy professor Massimo Pigliucci replied (in part):

“Patrice, 

“…Hence, to think critically, one needs enough facts. Namely all relevant facts.”

Enough facts is not the same as all the relevant facts, as incorrectly implied by the use of “namely.” 

“It is arrogant to think that other people are prone to “logical fallacies”.”

It is an observation, and facts are not arrogant. 

“A Quantum Wave evaluates the entirety of possible outcomes, then computes how probable they are.”

Are you presenting quantum waves as agents? They don’t evaluate and compute, they just behave according to the laws of physics.

“just as with the Quantum, this means to think teleologically, no holds barred”

The quantum doesn’t think, as far as I know. 

“Emotional Thinking Is Superior Thinking” 

I have no idea what you mean by that. Superior in what sense? And where’s the bright line between reason and emotion?

“Any “logical” thinking is literally, a chain made of points”

No, definitely not “literally.” 

It may not follow from the axioms, but I am having a hard time being emotionally seductive by intersecting circles. 

“Euclid’s worst fallacy was to exclude most of geometry, namely what’s not in a plane.”

That’s an historically bizarre claim to make. Like saying that Newton’s worst fallacy was to exclude considerations of general relativity. C’mon. 

“as no conclusion was reached, this implies that mathematics itself is illogical” 

Uhm, no. 

“to hope for the best, and launch logical chains in the multiverses of unchained axiomatics” 

Very poetic, I have no idea what that means, though.”

***

Massimo Pigliucci is professor of philosophy at CUNY in New York, and has doctorates both in biology and philosophy. However, truth does not care about having one, or two thousand, doctorates. It would take too long to address all of Massimo’s errors (basically all of his retorts above). Let me just consider two points where he clings to Common Wisdom like a barnacle to a rock: the question of Non-Euclidean geometry, and of the Quantum. He published most of the answer below on his site:

Dear Massimo:

Impertinence and amusement help thought. Thank you for providing both. Unmotivated thought is not worth having.

The Greeks discovered Non-Euclidean geometry. It’s hidden in plain sight. It is a wonder that, to this day, so many intellectuals repeat Gauss’ self-serving absurdities on the subject (Gauss disingenuously claimed that he had discovered it all before Janos Bolyai, but did not publish it because he feared the “cries of the Beotians”… aka the peasants; Gauss does not tell you that a professor of jurisprudence had sketched to him how Non-Euclidean geometry worked… in 1818! We have the correspondence.).

The truth is simpler: Gauss did not think of the possibility of Non-Euclidean geometry (although he strongly suspected Euclidean geometry was not logical). Such a fame greedster could not apparently resist the allure of claiming the greatest prize…

It is pretty abysmal that most mathematicians are not thinking enough, and honest enough, to be publicly aware of Gauss’ shenanigans (Gauss is one of the few Muhammads of mathematics). But that fits the fact that they want mathematics to be an ethereal church, the immense priests of which they are. To admit Gauss got some of his ideas from a vulgar lawyer is, assuredly, too painful.

That would be to admit the “Prince of Mathematics” was corrupt, thus, all mathematicians too (and, indeed, most of them are! Always that power thing; to recognize that ideas have come from outside the hierarchy in mathematics is injurious to the hierarchy… And by extension to Massimo.)

So why do I claim the Greeks invented Non-Euclidean geometry? Because they did; it’s a fact. It is like having the tallest mountain in the world in one’s garden, and not having noticed it: priests, and princes, are good at this, thus, most mathematicians.

The Greek astronomer Ptolemy wrote in his Geography (circa 150 CE):

“It has been demonstrated by mathematics that the surface of the land and water is in its entirety a sphere…and that any plane which passes through the centre makes at its surface, that is, at the surface of the Earth and of the sky, great circles.”

Not just this, but, nearly 400 years earlier, Eratosthenes had determined the size of Earth (missing by just 15%).

http://en.wikipedia.org/wiki/Eratosthenes

How? The Greeks used spherical geometry.
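For the record, the traditional reconstruction of Eratosthenes’ measurement is a single line of spherical-geometry arithmetic (the figures below are the usual textbook ones, and the conversion to kilometers depends on which stadion length one assumes): at the summer solstice the Sun stands overhead at Syene, while at Alexandria, roughly 5,000 stadia due north, it casts shadows at about 7.2°, one fiftieth of a full circle. Hence:

$$
C \;=\; d \times \frac{360^{\circ}}{\theta} \;\approx\; 5000\ \text{stadia} \times \frac{360^{\circ}}{7.2^{\circ}} \;=\; 250{,}000\ \text{stadia} \;\approx\; 39{,}000\text{–}46{,}000\ \text{km},
$$

which brackets the true circumference of about 40,000 km; the “15%” error quoted above corresponds to the longer of the commonly assumed stadion lengths.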

Great circles are the “straight lines” of spherical geometry. This is a consequence of the properties of a sphere, in which the shortest distances on the surface are great circle routes. Such curves are said to be “intrinsically” straight.
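To make “intrinsically straight” concrete, here is a small sketch (my own illustration in Python, assuming a perfectly spherical Earth of radius 6,371 km): the shortest surface route between two points runs along a great circle, and its length follows from the classic haversine formula.

```python
# Sketch: great circles as the "straight lines" of the sphere.  The shortest
# surface route between two points is a great-circle arc, whose length is
# given by the haversine formula (spherical Earth assumed).
from math import radians, sin, cos, asin, sqrt

def great_circle_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Shortest surface distance between two points on a sphere (haversine)."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlmb = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlmb / 2) ** 2
    return 2 * radius_km * asin(sqrt(a))

# Alexandria to Syene (modern Aswan), roughly Eratosthenes' baseline:
print(round(great_circle_km(31.20, 29.92, 24.09, 32.90)))  # ~ 840 km
```

The roughly 840 km it returns for Alexandria–Syene is consistent with Eratosthenes’ 5,000-stadia baseline.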

Better: Eusebius of Caesarea proposed 149 million kilometers for the distance to the Sun! (Essentially the modern value of about 149.6 million kilometers.)

Gauss, should he be around, would whine that the Greeks did not know what they were doing. But the Greeks were no fools. They knew what they were doing.

Socrates killed enemies in battle. The mathematicians of his time were not afraid of the Boeotians, contrary to Gauss.

Aristotle (384-322 BC) was keen to demonstrate that logic could be many things. Aristotle was concerned about the dependency of logic on the axioms one used. Thus Aristotle’s Non-Euclidean work is contained in his works on Ethics.

A thoroughly modern approach.

The philosopher Imre Toth observed the blatant presence of Non-Euclidean geometry in the “Corpus Aristotelicum” in 1967.

Aristotle exposed the existence of geometries different from plane geometry. The approach is found in no less than SIX different parts of Aristotle’s works. Aristotle outright says that, in a general geometry, the sum of the angles of a triangle can be equal to, or more than, or less than, two right angles.

One cannot be any clearer about the existence of Non-Euclidean geometry.

Actually, Aristotle introduced an axiom, now known as Aristotle’s Axiom, which is a theorem in Euclidean and Hyperbolic geometry (it is false in Elliptic geometry, thus false on a sphere).

Related to Aristotle’s Axiom is Archimedes’ Axiom (which belongs to modern Model Theory).

One actually finds non-trivial, beautiful NON-Euclidean theorems in Aristotle (one of my preferred frenemies).

Non-Euclidean geometry was most natural: look at a sphere, look at a saddle, look at a pillow. In Ethika ad Eudemum, Aristotle rolls out the spectacular example of a quadrangle whose interior angles sum to the maximum of eight right angles.
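For concreteness, and in modern (Gauss–Bonnet) notation which Aristotle of course never wrote, here is exactly what “equal to, more than, or less than two right angles” amounts to for a geodesic triangle of area A on a surface of constant curvature ±1/R²:

$$
\alpha+\beta+\gamma \;=\; \pi+\frac{A}{R^{2}} \;>\; \pi \quad (\text{sphere}), \qquad
\alpha+\beta+\gamma \;=\; \pi-\frac{A}{R^{2}} \;<\; \pi \quad (\text{saddle}),
$$

with equality to two right angles only in the flat plane. Aristotle’s quadrangle obeys the same bookkeeping: a quadrilateral with its four vertices on the equator encloses a hemisphere of area 2πR², so its angles sum to 2π + 2π = 4π, exactly eight right angles.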

Do Quantum Waves think? Good question; I have been asking it of myself for all too many decades.

Agent: from the Latin “agentem”, that which sets in motion. Quantum waves are the laws of physics: given a space, they evaluate, they compute. This is the whole idea of the Quantum Computer. So far, they have been uncooperative. Insulting them won’t help.
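Since “evaluate, compute” sounds mystical to some, a minimal illustration (a toy sketch in Python/NumPy, nothing more than standard quantum bookkeeping): a quantum state carries an amplitude for every possible outcome at once, and the Born rule squares those amplitudes into probabilities; a quantum computer is just a machine for steering that evaluation.

```python
# Toy sketch of "a quantum wave evaluates all possible outcomes and computes
# how probable they are": amplitudes over every basis state, Born-rule squares.
import numpy as np

ket0 = np.array([1.0, 0.0], dtype=complex)                    # the state |0>
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard gate

psi = H @ ket0                    # amplitudes for BOTH possible outcomes at once
probabilities = np.abs(psi) ** 2  # Born rule: probability = |amplitude|^2

print("amplitudes   :", psi)            # [0.707..., 0.707...]
print("probabilities:", probabilities)  # [0.5, 0.5]
```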

Patrice Ayme’

In Defense of Metaphysics

March 3, 2015

Metaphysics Rises From Brainy Ground, Including What Dreams Are Made Of:

Nihilism is what happens when Metaphysics is yanked out. And nothing could be further from the Truth. Much of the trouble with the present planet can be directly traced to inappropriate Metaphysics. Let me explain this a bit in this first part. Metaphysics is, indeed, about religion, what ties people together again, but it’s also about Logic (with a capital L).

Metaphysics is everywhere. Everybody uses it, in everyday language. Some will say: ”Oh, that’s just language.” Yes, but language is ideas, and ideas get embodied as brain structure. So here we see that not only metaphysics exists, but it is physically embodied. Any victim of Jihadism can testify that metaphysics has real consequences.

Before defining Metaphysics, one has to define “physics”. The word comes from the Greek physis, “nature,” from phyein “to bring forth, produce, make to grow” (related to phyton “growth, plant,” phyle “tribe, race,” phyma “a growth, tumor”). Physis is everywhere.

Imagining You Are Being Watched May Help You Be Good


[Helix Nebula, an expanding shell from a dying star, 700 light years away, being transformed into a White Dwarf; ESO Southern Observatory Near IR on the left; visible light, on the right.]

Science is made of physical phenomena so well known that they allow us to predict how things will grow.

But in the original sense, all discourse about what exists and changes in nature is part of physics. Call this Common Sense, CS. Metaphysics is about telling a story beyond what we are sure all others will agree they also observed.

The brain is all connected inside: that’s how logic is embodied. Much of it is individualized, call it I. So we have: Metaphysics = I – CS.

CS depends upon one’s tribe. Metaphysics and I are, of course, dependent upon the Individual.

And what is the Individual (brain) made of, and with? Experiences. Individualized experiences. All metamathematics is, clearly, Metaphysics in some sense (arguably mathematics itself is, literally, metaphysics).

Set Theory is Metaphysics. That can be demonstrated easily: like all genuine Metaphysics, it is full of contradiction(s).
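The canonical contradiction, for concreteness: naive, unrestricted comprehension, the intuitive heart of Set Theory, yields Russell’s paradox in one line:

$$
R \;=\; \{\,x \mid x \notin x\,\} \;\Longrightarrow\; \bigl(R \in R \iff R \notin R\bigr).
$$

That is precisely why axiomatic set theories (Zermelo–Fraenkel and its kin) have to legislate which comprehensions are permitted.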

Badiou’s aphorism holds up pretty well: “Set theory is all the metaphysics you need, and physics all of the ontology.” Well, all the Metaphysics you needed to do some basic mathematics in the Twentieth Century… But not even all of it, hence the rise of Category Theory in the 1950s. This demonstrates that Metaphysics is not just beyond physics, it is eminently practical (because the mathematics invented by Grothendieck, PBUH, this advanced Algebraic Geometry is… practical).

Category Theory is very practical Metaphysics.

Carefully tailored Metaphysics can prove much. For example, I can devise Metaphysics where the circle can be squared (I just use my finite mathematics, end of proof).

By the definition I gave above, Metaphysics is the set of all thought systems that harbor contradiction(s) with Common Sense (otherwise it would be Common Sense).

Could we do without Metaphysics?

Of course not. Metaphysics is beyond Common Sense. But we know something exists beyond Common Sense: for example future or possible science, research projects, and, in general, various guesses of all sorts, including artistic ones.

So it’s not all a question of “data”, in the restrictive sense, or, even more generally, “quanta”. Metaphysics is grounded beyond “data” in the restrictive sense. Guesses, intuitions, even desires and provocations, the feeling of what might be, or ought to be, are part of the “data” that grounds Metaphysics.

Our minds are grounded not just in hard facts, personal experiences and feelings, but also in dreams, vague tendencies, and emotions, let alone collective rages and misunderstandings of history, much of which we harbor in our inner spiritual recesses as various infantile traumas.

In all this, our very individualized Metaphysics are grounded. Lots of dream stuff. So people are invited not to deduce lethal consequences from it.

We need it.

But we also need to understand it. Especially when it animates our thermonuclear hands. Or those of others. Especially those of others. To understand, and fight, the Metaphysics of banksters, greedsters, archeo-imperialists, or Islamofascists, we need better, improved, and fully aware Metaphysics.

Patrice Ayme’

The Brain Is A Web, Thus So Is Reason

February 25, 2015

When one looks at linguistics, biological evolution (in the most general sense, including eco-systems), neurology, or civilization, one does not see trees so much as webs with reactive (not to say intelligent) strands.

The latest news in neurology is, indeed, that the white matter, made of glial cells, axons, oligodendrocytes, etc., itself reacts (not to say “thinks”).

Why should not reason itself be the same?

Should the brain be according to reason, or reason, according to the brain?

Once reason has become a web, it has a non-trivial topology (in particular with a genus).

Old Explanation: Genetic Evaluation. Truth: Plain Scary


The genius of genus.

The next natural question is whether reason has a metric, a geometry, a notion of proximity. Indeed, look at the brain, namely, the mind. Some nerve impulses go as far as possible, all along a motor neuron, or along a long axon. However, others travel only a short distance, and are manipulated, not just at the first synapse they meet, but even before that, along the axons themselves.
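To make the web-with-a-genus picture concrete, here is a toy sketch (my own illustration, not anything from neurology): represent a handful of statements and their support links as a graph. Its number of independent loops, the graph analogue of genus, is E − V + C (edges minus vertices plus connected components), and a notion of proximity comes for free as the shortest-path distance along the strands.

```python
# Toy sketch: a small "web of reasons" as a graph.  Independent loops
# (E - V + C) play the role of the genus; shortest-path hops play the
# role of a metric, a notion of proximity between ideas.
from collections import deque

web = {  # hypothetical statements and which statements they support
    "axiom":     ["lemma", "intuition"],
    "lemma":     ["theorem"],
    "intuition": ["theorem"],
    "theorem":   ["axiom"],   # a feedback strand, closing loops
}

# Build an undirected view of the web.
nodes = set(web) | {v for vs in web.values() for v in vs}
edges = {frozenset((u, v)) for u, vs in web.items() for v in vs}
adj = {n: set() for n in nodes}
for e in edges:
    u, v = tuple(e)
    adj[u].add(v)
    adj[v].add(u)

def bfs_distances(start):
    """Shortest-path distance (in hops) from start to every reachable node."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        x = queue.popleft()
        for y in adj[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                queue.append(y)
    return dist

components, seen = 0, set()
for n in nodes:
    if n not in seen:
        components += 1
        seen |= set(bfs_distances(n))

loops = len(edges) - len(nodes) + components
print("independent loops (genus-like number):", loops)                      # 2
print("proximity of axiom to theorem:", bfs_distances("axiom")["theorem"])  # 1
```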

Some will fear for reason, as they read these lines.

But a model exists, in physics. Renormalization.

As a field’s strength increases, approaching its putative source, the field itself modifies the effect it is supposed to describe. There is no general theory of how this works (although some field theories are known as “renormalizable”).

To put it in terms non-specialists may understand, the force varies with, and because of, the force itself. In any case, electromagnetism starts with 1/d^2, and ends up very different.
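A concrete instance, quoted for illustration (the standard one-loop, leading-logarithm result of quantum electrodynamics, not anything specific to this essay): the measured electric charge itself grows as one probes closer to the source,

$$
\alpha_{\text{eff}}(Q^{2}) \;=\; \frac{\alpha}{\,1-\dfrac{\alpha}{3\pi}\ln\dfrac{Q^{2}}{m_{e}^{2}}\,}, \qquad Q^{2}\gg m_{e}^{2},
$$

so the naive 1/d² Coulomb law only survives at long distances: the force varies because of the force itself.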

Actually, any physical explanation adorned with a non-linear feedback resists linear logic (hence the problems in modelling a lot of natural phenomena, from thermonuclear fusion to hypersonic flow).

Look at the mood of submission to Islam: ever since the Charlie Hebdo attack, a general bending of reason to savagery, a vile submission to unreason, with the pretext of tranquility, has imposed itself. Representing something resembling what some imagined a so-called “Messenger” would look like is frowned upon. Outright censorship is applied in the Anglosphere.

(It is not just ironic, but an example of thoroughly dysfunctional, discontinuous, self-contradictory reason: since depicting human beings and gods is forbidden in Islam, how would true Islamists know what their god, dog, messenger, or whatever would look like?)

Another meta-example is graciously offered by mathematics itself. Mathematics is the field depicting reason itself. However, it does not have safe foundations.

Category Theory was actually strong because of the mood that underlay it: namely, that foundations, globally, should not be worried about, while, locally, much progress can be made by making them richer (that’s what Grothendieck did).

So, once again, we see that reason is rich, and can get richer, but it is local, not global.

Evolution, as understood for most of the Twentieth Century, was driven by chance or weird considerations about reproduction (animals were supposed to prefer so and so because it allowed them to reproduce better… as if they cared!). For example, we were told the Peacock’s tail was there to show females that its bearer was healthy, and hence would give them more offspring.

(More refined recent studies show the obvious: just as with eyespots on butterfly wings, Peacock tails may rather be scare devices, as anybody who has deployed an umbrella in front of a lion would know.)

Chance is here to stay, but the big Damocles sword hanging over facile explanations is Quantum Physics itself: the Quantum is teleological, and no doubt impacts both genetics and epigenetics.

This means that the most inner machinery is not just potentially teleological, but really teleological.

That Biological Evolution did not exploit what is, after all, the most fundamental law of the physical world, the Quantum Process, as it progressed according to physics for 4 billion years, is, frankly, impossible.

Patrice Ayme’