Posts Tagged ‘Descartes’

I FEEL, THEREFORE I AM

December 3, 2015

Descartes Cut Down To Size, Consciousness Extracted:

Concepts such as “consciousness”, “free will”, “sentience”, or (to sound learned) “qualia”, are often brandished without connecting them to (what are called in Quantum Physics) “observables”.

I will try to correct this here. I will associate “consciousness”, “free will”, “sentience”, or “qualia” with something observable, namely unpredictability. This enables me to claim that even simple animals have emotions, consciousness, etc.

Yet my approach, unpredictability, provides a measure (of consciousness, free will, sentience, qualia), and thus does not put all species in the same basket (as the unpredictability a mind is capable of will vary; and not just vary as a number).
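Unpredictability has the advantage of being operationalizable. As a toy illustration (the tool choice is mine, not anything from ethology): compressibility is a crude stand-in for predictability, since a behavior stream a compressor can squeeze down was predictable, and one it cannot squeeze was not.

```python
import os
import zlib

def unpredictability(behavior: bytes) -> float:
    """Crude proxy for unpredictability: compressed size / raw size.
    Near 0: rote, fully predictable. Near (or slightly above) 1: incompressible."""
    if not behavior:
        return 0.0
    return len(zlib.compress(behavior, 9)) / len(behavior)

rote = b"left-right-left-" * 200   # pure repetition: perfectly predictable
novel = os.urandom(3200)           # no detectable structure at all

print(unpredictability(rote))      # tiny ratio: a rote behavior stream
print(unpredictability(novel))     # ratio near 1: nothing to predict
```

The point of the sketch is only that the measure orders behaviors: rote repetition scores near zero, structureless output near one, and real animal behavior somewhere in between.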

Approaching intelligence through unpredictability does not fall into the same excesses as Princeton’s Peter Singer and others in the “animal rights” movement who claim (with the Nazis) that fleas and humans have equal rights (so we may as well treat humans as fleas).

The notion of “observable” was central to the birth of Quantum Physics, and still is.

Aplysia: Brainy, Thus Sentient, Conscious & Also Unpredictable


Clever Enough to Become a Plantimal…

There are more than 3,000 species of “Nudibranchs”, these sea slugs, so named because their branchies (French for gills) are nude… Just when you thought you were safe from French. The particular one above steals genes from the photosynthesizing algae it eats. Then it becomes solar powered. Science does not know how this works, because Obama prefers to finance his friend Elon Musk rather than fundamental research in genetics and solar power. TO SOLVE THE GREEN HOUSE GASES CRISIS, ONE NEEDS MORE FUNDAMENTAL RESEARCH. NOW. Finance it with the 6,000 billion dollars given to fossil fuel plutocrats and their obsequious lackeys.

The notion of “observable” is central to Quantum Physics, and irritated Einstein, especially when Heisenberg pointed out that it was he, Einstein, who had introduced it in physics. It is no coincidence that I was driven to capture it for consciousness: it is central to science.

Quantum processes “behave” as if they were conscious of the environment at a distance. Einstein is unhappy in his grave: reality has turned into his worst nightmare. Poor Einstein was very much a Nineteenth Century physicist, he did not graduate to the new age of implicit wholeness.

Chris Snuggs: “As if” they are conscious. Does that mean they ARE conscious?

Patrice: Not an answerable question, because Quantum Processes cannot be interviewed. But I am sure that the feeling of consciousness is rooted in those Quantum Processes. Precisely because Quantum Processes behave as one could imagine elements of consciousness to behave: they are both unknowable and retrospectively determinable.

Chris Snuggs: “Does not “consciousness” need a brain?”

Patrice Ayme: First, the simple answer. What’s a brain? A set of neural networks. Aplysia has around 650 neurons.

A brain “thinks”. What is thinking? How do we know that an animal thinks? When it behaves in a way we cannot always predict. Thinking manifests itself by the ability to make a (set of) neural network(s) behave UNPREDICTABLY.

Thinking is detected by the ability to go beyond (rote) learning, thus, to be unpredictable.

At least that’s what I claim. I claim this, because that’s the best I can… think of. What else?

Here is an example illustrating the preceding concepts. I met a giant sea turtle in the ocean. I knew it was thinking. How? Because it showed a lot of initiative (especially for a supposedly stupid reptile).

First it determined I was no threat. It swam towards me. I could see its eye moving, inspecting me. I could not predict what it was going to do. It extended a vast flipper next to my fingers, we delicately touched. It was a sort of respectful handshake across 400 million years of evolution. I have been at the (very obscure, as it should be!) Sistine Chapel, at the Vatican, where God extends a finger to man.

This was much better. A flipper was extended, from turtle, to human. Then my reptilian friend slowly dove. I had done nearly nothing. The sea creature had created the encounter. Deliberately. Unpredictably.

Two days earlier, the same turtle and I (it is particularly large, so I know it was the same one) had swum on the surface in a particularly strong current, in the exact same spot, so it probably recognized me: sea turtles have color vision, and I am unmistakable, with a bright fluorescent orange and yellow shirt, pants and socks, and giant bright yellow fins.

It is precisely because a human being, the world’s smartest animal, cannot predict the behavior of another organism that we know that this organism thinks, is conscious, has sentience. The first time, it decided to swim five feet away, although I was all business, having trouble with the current, and not interested at that point in socializing with sea monsters. My sole aim was to regain the beach, 400 meters away, past a sea cliff.

“Sentience” comes from the Latin sentientem (nominative sentiens) “feeling,” present participle of sentire “to feel”. The turtle was at the very least intrigued by my behavior the first time (‘crazy human swims against current pretending to be a turtle’), and was interested to inspect me some more.

In the case of three neurons, free will (or at least unpredictability) has been demonstrated. https://patriceayme.wordpress.com/…/three-neurons-free…/

The question arises of what “consciousness” is, and how it can be determined to happen. That’s harder.

In “Surveiller et Punir” (mistranslated in English as “Discipline and Punish” instead of “Surveillance and Punishment”), Michel Foucault quoted at length the full execution of Damiens, a religious fanatic who had pricked Louis XV with a knife. Foucault wanted to show how punishment changed. That gives me a justification to set up my own gory scheme.

Descartes is famous for his “I think therefore I am”. What he was after was finding the simplest, most fundamental basis to start from. So doing, he made a huge mistake.

Indeed, one does not need to think to know that one is.

That can easily be shown by a thought experiment. Grab Descartes, tie him up on a table. The strength and number of bonds are important. Then take a rusty saw, and start to cut Descartes’ leg off. After Descartes puts in doubt your philosophical qualifications, he will start screaming. By the time you get to the sensitive nerves, next to the bone, his discourse will have lost any apparent method. At this point Descartes will not be thinking, but busy screaming his head off. Still, he would be fully existent, and feeling more alive than ever.

Thus sentience is more fundamental than thinking.

This shows, once again, that correct thinking starts with the correct feelings, moods, emotions.

This has many applications. When people extol Christianism, or Islamism, as if they were civilizations, instead of crazy superstitions with a very LETHAL Dark Side, one has to ask whether they set up the mood of the Enlightenment.

When “leaders” gather in Paris for the Green House Gas (GHG) crisis, are they aware of the correct emotion, the correct mood, that they should be infused with? Namely, that they have only a few years to research the technologies which will allow us to get rid of the GHG crisis, or an unprecedented holocaust, of the entire biosphere, may, or will, happen…

And will they be conscious that it will be their fault, and the fault of the 6,000 billion dollars of yearly fossil fuel subsidies they preside over, like the ecological terrorists they are?

This is an example of the following:

Verily, if you want to think right, you have to feel right. First.

Patrice Ayme’

The Turing Test Doesn’t Matter

June 18, 2014

More precisely: The Descartes (-Turing) Test Is Stupid

The “Turing Test” is a big deal in Artificial Intelligence and logic, for reasons that are assuredly not flattering, as the “Test” is obviously flawed. The Test confuses conversation and imagination, while identifying both with intelligence.

The fundamental error of this Descartes Test is mathematical (ironic, as Descartes was one of the greatest mathematicians ever: he invented algebraic geometry, the foundation of all modern science and technology).

The Descartes Test overlooks the fact that the set of all possible conversations is not just countable, but even, certainly, in practice, finite. Thus a machine could plausibly encompass all possible conversations, as it means interlinking a finite set with logical chains.

(To excuse Descartes, the notion of countability had not yet been clearly defined in his time; it leads, in turn, to the finiteness of speech, modulo my finite mood.)
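The finiteness claim is easy to make concrete. Over a finite alphabet, the conversations of bounded length form a finite (if astronomically large) set. A sketch, where the alphabet size and length bound are arbitrary choices of mine:

```python
def conversation_count(alphabet_size: int, max_length: int) -> int:
    """Number of distinct strings of length 0..max_length over k symbols:
    k^0 + k^1 + ... + k^n, a finite geometric sum."""
    k = alphabet_size
    return sum(k ** i for i in range(max_length + 1))

# 27 symbols (letters plus space), utterances up to 80 characters:
total = conversation_count(27, 80)
print(total)                            # huge, but finite: enumerable in principle
print(total == (27 ** 81 - 1) // 26)    # closed form of the geometric sum checks out
```

Huge, but finite: in principle a machine could pre-link every such utterance, which is exactly why passing a conversation test proves so little.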

Other objections to the Descartes Test show up in the essay reblogged below (which I wanted to write long ago, and have alluded to, here and there).

The “Turing” Test certainly ought to be called the Descartes Test, in light of the quote given in the attached essay. To know that the Turing test was actually invented by Descartes is of no small consequence.

Who “invented” what is not just a question of justice. And not just a question of the history of the systems of thought. It’s also a question of logic: knowing an idea appeared early on is a hint that it ought to be obvious, for example.

This process of associating the correct labels ought to be extended to all fields of inquiry. For example, Johannes Buridanus formulated clearly the law of inertia, circa 1320. That’s more than three centuries before the Anglo-Saxon gentleman generally celebrated as its author was born.

This is a testimony that the Church was incredibly efficient, in the late Fifteenth Century, in its repression of advanced thinking. Buridan was put on the “Index”… Except in Cracow, where Copernicus studied; and, when he was dying, Copernicus re-published Buridan’s heliocentric proposition.

This ought to be a warning to the pseudo-scientist attitude about the Multiverse and Strings: too much craziness could lead to an anti-science backlash, on the ground of common sense.

(Fortunately, there is biology, which is much more scientific than physics, these days!)

The Turing Test pretends that intelligence is all about conversation, a finite process. It’s not. It’s about imagination (a much larger process).
Patrice Ayme’

Scientia Salon

by Massimo Pigliucci

You probably heard the news: a supercomputer has become sentient and has passed the Turing test (i.e., has managed to fool a human being into thinking he was talking to another human being [1,2])! Surely the Singularity is around the corner and humanity is either doomed or will soon become god-like.

Except, of course, that little of the above is true, and it matters even less. First, let’s get the facts straight: what actually happened [3] was that a chatterbot (i.e., a computer script), not a computer, has passed the Turing test at a competition organized at the Royal Society in London. Second, there is no reason whatsoever to think that the chatterbot in question, named “Eugene Goostman” and designed by Vladimir Veselov, is sentient, or even particularly intelligent. It’s little more than a (clever) parlor trick. Third, this was actually the second time that a chatterbot passed…


FINITE CALCULUS

October 31, 2013

If we want to get real smart, we will have to leave no reason unturned. Foundations of calculus have been debated for 23 centuries (from Archimedes to the 1960s’ Non-Standard Analysis). I cut the Gordian knot in a way never seen before. Nietzsche claimed he “made philosophy with a hammer”; I prefer the sword. Watch me apply it to calculus.

I read in the recent (2013) MIT book “The Outer Limits Of Reason” published by a research mathematician that “all of calculus is based on the modern notions of infinity” (Yanofsky, p 66). That’s a widely held opinion among mathematicians.

Yet, this essay demonstrates that this opinion is silly.

Instead, calculus can be made, just as well, in finite mathematics.

This is not surprising: Fermat invented calculus around 1630 CE, while Cantor made a theory of infinity only 260 years later. That means calculus made sense without infinity. (Newton used this geometric calculus, which is reasonable… with any reasonable function; it’s rendered fully rigorous for all functions by what’s below… roll over Weierstrass… You all, people, were too smart by half!)

If one uses the notion of Greatest Number, all computations of calculus have to become finite (as there is only a finite number of numbers, hey!).

The switch to finitude changes much of mathematics, physics and philosophy. Yet, it has strictly no effect on computation with machines, which, de facto, already operate in a finite universe.

In the first part, generalities on calculus, for those who don’t know much; it can be skipped by mathematicians. Second part: an original contribution to calculus (using high school math!).

***

WHAT’S CALCULUS?

Calculus is a non trivial, but intuitive notion. It started in Antiquity by measuring fancy (but symmetric) volumes. This is what Archimedes was doing.

In the Middle Ages, it became more serious. Shortly after the roasting of Jeanne d’Arc, southern French engineers invented field guns (this movable artillery, plus the annihilation of the longbow archers, is what turned the fortunes of the South against the London-Paris polity, and extended the so-called “100 Years War” by another 400 years). Computing trajectories became of the essence. Gunners could see that Buridan had been right, and Aristotle’s physics was wrong.

Calculus allowed one to compute the trajectory of a cannonball from its initial speed and orientation (speed varies due to speed-dependent air resistance, so it’s tricky). Another thing calculus could do was measure the surface below a curve, and relate curve and surface. The point? Sometimes one is known, and not the other. Higher-dimensional versions exist (then one relates surfaces with volumes).

Thanks to the philosopher and captain Descartes, inventor of algebraic geometry, all this could be put into algebraic expressions.

Example: the shape of a sphere is known (by its definition); calculus allows one to compute its volume. Or one can compute where the maximum, or an inflection point, of a curve is, etc.

Archimedes made the first computations for simple cases like the sphere, with slices. He sliced up the object he wanted, and approximated its shape by easy-to-compute slices, some bigger, some smaller than the object itself (now they are called Riemann sums, after the 19th-century mathematician, but they ought to be called after Archimedes, who truly invented them, 22 centuries earlier). As he let the thickness of the slices go to zero, Archimedes got the volume of the shape he wanted.
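Archimedes’ slicing can be replayed directly, with a finite number of slices throughout (the slice count of 1,000 below is an arbitrary choice of mine):

```python
import math

def sphere_volume_by_slices(r: float, n: int) -> float:
    """Approximate a sphere's volume as a stack of n thin circular disks.
    The disk at height z has radius sqrt(r^2 - z^2), hence area pi*(r^2 - z^2)."""
    dz = 2.0 * r / n
    total = 0.0
    for i in range(n):
        z = -r + (i + 0.5) * dz          # midpoint of the i-th slice
        total += math.pi * (r * r - z * z) * dz
    return total

exact = (4.0 / 3.0) * math.pi            # sphere of radius 1
print(sphere_volume_by_slices(1.0, 1000))  # already agrees with 4/3*pi to several digits
print(exact)
```

One thousand slices, a thoroughly finite number, already pin the volume down to far better precision than any physical measurement of a sphere.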

As the slices got thinner and thinner, there were more and more of them. From that came the idea that calculus NEEDED the infinite to work (and by a sort of infection, all of mathematics and logic was viewed as having to do with infinity). As I will show, that’s not true.

Calculus also allows one to introduce differential equations, in which a process is computed from what drives its evolution.

Fermat demonstrated the fundamental theorem of calculus: the integral is the surface below a curve, and differentiating that integral gives the curve back; otherwise said, differentiating and integrating are inverse operations of each other (up to constants).

Then arrived Newton and Leibniz. Newton went on with the informal, intuitive Archimedes-Fermat approach, what one should call the GEOMETRIC CALCULUS. It’s clearly rigorous enough (the twisted counterexamples devised in the nineteenth century became an entire industry, and graduate students in math have to learn them. Fermat, Leibniz and Newton, though, would have pretty much shrugged them off, saying the spirit of calculus was violated by this hair splitting!)

Leibniz tried to introduce “infinitesimals”. Bishop Berkeley was delighted to point out that these made no sense. It would take “Model Theory”, a discipline from mathematical logic, to make the “infinitesimals” logically consistent. However, the top mathematician Alain Connes is scornful of infinitesimals, stressing that nobody could point one out. Yet I have the same objection for… irrational numbers. Point at pi for me, Alain… Well, you can’t. My point entirely, making your point irrelevant.

***

FINITUDE

Yes, Alain Connes, infinitesimals cannot be pointed at. Actually, there are no points in the universe: so says Quantum physics. The Quantum says: all dynamics is waves, and waves point only vaguely.

However, Alain, I have the same objection with most numbers used in present day mathematics. (Actually  the set of numbers I believe exist has measure zero relative to the set of so called “real” numbers, which are anything but real… from my point of view!).

As I have explained in GREATEST NUMBER, the finite amount of energy at our disposal within our spacetime horizon reduces the number of symbols we can use to a finite number. Once we have used the last symbol, there is nothing more we can say. At some point, the expression N + 1 cannot be written. Let’s symbolize by # the largest number. Then 1/# is the smallest number. (Actually (# – 1)/# is the fraction with the largest components.)

Thus, there are only so many symbols one can actually use in the usual computation of a derivative (as computers know well).  Archimedes could have used only so many slices. (The whole infinity thing started with Zeno and his turtle, and the ever thinner slices of Archimedes; the Quantum changes the whole thing.)

Let’s go concrete: computing the derivative of x → x². It is obtained by taking what the mathematician Cauchy, circa 1820, called the “limit” of the ratio ((x + h)² – x²)/h. Geometrically this is the slope of the line through the points (x, x²) and (x + h, (x + h)²) of the x → x² curve. That ratio is (2x + h). Then Cauchy said: “Let h tend to zero; in the limit h is zero, so we find 2x.” In my case, h can only take a finite number of values, increasingly smaller, and they stop. So ultimately, the slope is 2x + 1/#. (Not, as Cauchy had it, 2x.)
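The computation above is easy to replay numerically. With any finite h the machine reports exactly 2x + h (up to rounding), never Cauchy’s limit 2x:

```python
def slope(f, x, h):
    """Slope of the secant line through (x, f(x)) and (x + h, f(x + h))."""
    return (f(x + h) - f(x)) / h

square = lambda x: x * x

for h in (1.0, 0.1, 0.001, 1e-6):
    # For f(x) = x^2 this is algebraically 2x + h: 7.0, 6.1, 6.001, 6.000001, ...
    print(h, slope(square, 3.0, h))
```

Every value printed is of the form 2x + h for some finite, nonzero h; the “limit” 2x is a value no finite computation ever actually produces.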

Of course, the computer making the computation itself occupies some spacetime energy, and thus can never get to 1/# (as it monopolizes some of the matter used for the symbols). In other words, as far as any machine is concerned, 1/# = 0! In other words, 1/# is… infinitesimal.
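Floating-point arithmetic exhibits exactly this: each machine has its own effective 1/#, the largest h it already treats as zero next to 1. A quick probe (IEEE 754 double precision assumed):

```python
import sys

# Halve h until the machine can no longer tell 1.0 + h from 1.0:
# that h is the machine's effective "1/#" at the scale of 1.0.
h = 1.0
while 1.0 + h != 1.0:
    h /= 2.0

print(h)                       # 2**-53 for IEEE 754 doubles, about 1.1e-16
print(sys.float_info.epsilon)  # the documented machine epsilon, 2**-52
```

Below that threshold the machine genuinely computes with 1/# = 0, which is the sense in which finitude leaves machine calculus untouched.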

This generalizes to all of calculus. Thus calculus is left intact by finitude.

***

Patrice Ayme

***

Note: Cauchy, a prolific and major mathematician, but also an upright, fanatical Catholic, who refused to take an oath to the government for decades, thereby condemning his career, would have found it natural to believe in infinity… the latter being the very definition of god.

I AM, & SOMETIMES I THINK

July 17, 2012

SUM ERGO COGITO:

Abstract: Thinking is what defines us. Agreed.

Yet, from most perspectives, Descartes’ famous “Cogito Ergo Sum”, “I Think Therefore I Am”, is (grotesquely) counterfactual, as I show below, from the nature of logic, from science, and from introspection. No, the soul does not come before and independently of the body, Messieurs Descartes and Havel. The reality is the exact opposite.

Thinking emerges from the rough and tough, it is something that rises only from very complex, very organized matter. It may be the face of god, but it is first an act of human will. Last, and not least, the self extends well beyond conscious thought.

***

LOGIC IS (NEUROLOGICAL) RULES, DATA ARE (NEUROLOGICAL) INPUT; EXISTENCE FIRST:

It often happens, in the course of human debates, that, by manipulating standard concepts from fresh, and sometimes opposite perspectives, one is perceived to say the exact opposite of what one is trying to say. Why? Because much of what passes for thinking is actually perfunctory checking for the presence of a few known facts, in an ancient mood.

(This is not really a failure of the logical system; it turns out perception itself works in the same perfunctory way: 90% of input in the visual system consists of reentrant fibers…)

One consequence of my essay “I Mood Therefore I Think” is the exact opposite conclusion of Descartes’ most famous statement, from a multi-pronged attack.

Yet Paul Handover, the excellent gentleman and versatile thinker who founded the excellent site “Learning From Dogs”, in what I fear could be a standard critique, suggested that I complicated matters about thinking by trying to deviate from Descartes’s “I think therefore I am”. Said he:

“Cogito ergo sum, or as the French would say, “Je pense donc je suis”…surely all you are saying is that famous phrase, “I think, therefore I am”?

Ergo, writing so extensively about moods is complicating something basic to man. Some humans think and some don’t!”

Well, surely not. (Paul later understood what I meant, as the comment section made clear.) I agree that moods, paying attention to moods, considerably complicates the analysis of thinking, as I tried to show, for example, with Socrates’ obsession with pathetic little logic. That itty-bitty logic was just a transparent way to change the conversation from what was really wrong with Athens, namely that it was a slave society… Instead Socrates lived as a hanger-on of the golden youth of Athens, those whose descendants would ultimately collaborate with Macedonian plutocracy (Antipater and his goons, 322 BCE). About that most grievous logical flaw, he had nothing to say; it was a question of moods.

Living, worldwide, among various natives, all endowed with very varied moods, about the same things, from Silicon Valley to Iran, Black Africa to the Latin Quarter, has taught me that moods dominate logic. Maybe not locally, in a mind, but certainly, globally, throughout a mind.

Recently I was talking to a Silicon (Valley) mini titan, and he asked me how my writing was doing, feigning polite interest, while barely hiding his considerable irritation, hostility and contempt (for all I represented, the Cogito). The mood he projected was clearly not the mood I would have enjoyed at the Café de Flore in Paris. Nor, of course, with such a mood in place, could the debate reach any depth. Silicon Valley does not want depth, just profits and market share, enabled by financial plots, and as little government as possible (while entertaining and financing the president). That’s the mood.

The first thinker to dare criticize Descartes directly was the (ultra-rich) Ludwig Wittgenstein, who went to Cambridge to study with Russell, and taught there, between bouts of building a cabin with his hands in Norway, and renouncing his plutocratic prerogatives. (Although it can be said Sartre & Al. made a covert critique of Descartes, see below.)

Wittgenstein thought Descartes’ famous slogan was pointless. Ludwig used to make fun of Descartes in his Cambridge seminar by loudly remarking: “I think, therefore it rains!” Or: “I think, therefore the sky is blue!” He did not elaborate more than that; I will.

All humans think. Simply some refuse to do it creatively, or have been conditioned, by a special mood, to avoid all and any creative thinking.

On the face of it, Descartes’ “Cogito” statement is ridiculous, as it uses an emerging property to define existence itself. But emergence pre-supposes existence. (And see what Existentialism hinted about the subject below.) And yet we will see the story is a bit more subtle.

***

THE BRAIN EMBODIES LOGIC, PHYSICS, MATHEMATICS:

When one looks at an implication, a → b, one is looking at a piece of neurology. Most mathematicians not only do not understand that, but refuse to understand it, are highly offended by it, and would rather leave the room screaming (they already have). However, so it is.

The wolf can howl to the moon, call it divine, still it is the moon. A physical object. Just like the mathematician can howl to mathematics, call it divine, still, like the moon, it’s just out there. That makes it even more important, but nothing physics did not invent first. 

Mathematicians want to call mathematics divine, for the same reason dogs want to call the moon divine: because, having discovered their object of adoration to be out of this world makes them feel divine about themselves (something very obvious in mathematicians). Descartes, creating the world just from his own thinking, is a typical case.

Reality is much more prosaic, not to say vulgar.

It is well known that a dog trying to get at a ball thrown in the water, will run along the beach just so, and jump in the water according to the optimal trajectory confirmed by electronic computers and 7,000 years of intense human efforts to write down the rules of calculus, so that they could be installed inside said computers.
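That beach-and-ball problem is a classic minimization (popularized by Tim Pennings’ “Do Dogs Know Calculus?”). A sketch, with speeds and distances that are merely illustrative: the dog runs the beach at speed r, swims at s < r, and picks the jump point y minimizing total time T(y) = (d − y)/r + sqrt(y² + w²)/s; setting dT/dy = 0 gives y* = ws/sqrt(r² − s²).

```python
import math

def optimal_jump(d: float, w: float, run: float, swim: float) -> float:
    """Ball lands d meters down the beach and w meters offshore.
    Minimize T(y) = (d - y)/run + sqrt(y^2 + w^2)/swim over the jump point y.
    Setting dT/dy = 0 yields y* = w*swim / sqrt(run^2 - swim^2)."""
    y = w * swim / math.sqrt(run * run - swim * swim)
    return min(y, d)   # cannot usefully jump past the ball's beach projection

# Illustrative numbers only (of the order Pennings measured for his dog):
d, w, run, swim = 20.0, 10.0, 6.4, 0.9
y_star = optimal_jump(d, w, run, swim)
T = lambda y: (d - y) / run + math.hypot(y, w) / swim

print(y_star)                                   # jump point, meters before the ball
print(T(y_star) <= T(y_star + 0.5))             # jumping later is slower
print(T(y_star) <= T(max(y_star - 0.5, 0.0)))   # so is jumping earlier
```

The dog, of course, solves this without ever writing dT/dy = 0, which is exactly the point of the paragraph above.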

How do mathematicians think wolves know calculus? (And so do lions, I have seen it.) Because they got the Fields Medal, the Abel Prize? How come the dog takes a year to learn what takes the mathematician 15? Because they read it in books, like human mathematicians?

No, it’s much simpler than that. Wolves have neurobiology which embodies (the) calculus (they need). This is the reason for what Wigner called “the unreasonable effectiveness of mathematics“. The mind is built from the existence of histories experienced. Yes, even in wolves. They make this spiritual construction when they play as puppies.

The puppies play with a lot of possibilities, their minds memorize those that work the best. It’s not building the cathedrals, but it leads there.

(The basic principles of cathedral construction were also found by trial and error, then culturally transmitted… so was calculus, now culturally hammered in, so that young human mathematicians, differently from those poor dogs, do not have to invent it!)

***

THINKING, CONSCIOUSNESS, EMERGE AFTER PLENTY:

Logic is made of (neurological) rules, data consist in (neurological) input (most internally generated). Those exist first. Thinking comes later, it is what is called an Emerging Property.

What is an emerging property? An enormous system is put in place, with an enormous number of interactions, and, as it becomes dynamic, it builds an order, an order that emerges progressively. Even plate tectonics is an emerging property. Crystallization is an example. Pain, physical or psychological, is another. All societies, even those of ants, are emerging properties.

Clearly, whatever thinking is, it’s an emerging property, because thinking requires a bunch of neurons to come together, first.

Moods and sensations are the indispensable background to any logical system.

It’s not just my opinion, and it’s not just neurological. Open any treatise on logic. OK, it’s easy to get lost within logic, as a quick peek at the Stanford Encyclopedia of Philosophy shows. Logic is a universe of its own. Most mathematicians know nothing about it, and don’t want to know (lest they feel beaten at their own game, logical arrogance). To simplify, as usual, I go hard core, by sticking to hard core pragmatism (as found in the best hard science and mathematics).

Judicious simplification leads to better abstraction. I am going to simplify what logic is.

I have studied various logical systems, long and hard, even including Girard’s Linear Logic (invented very recently, in 1987). I have also studied, long and hard, before it became fashionable, Category Theory. Category Theory is literally a rigorous structuralism, a bunch of rules of manifest interest. (Nobody knows if it can replace Set Theory as a Foundation of Mathematics; practitioners don’t care, it’s too useful to give them time for deep meditation.)

My rough (philosophical) conclusion from all this esoterica: any logical system (including categories) consists, at the very minimum of:

1) a set of rules (it could be diagram chasing in a category). Call that the ‘logic‘.

2) a universe of symbols to which these rules apply. Call that the ‘universe‘ (in which that logic operates).

The way I look at it, this corresponds to the way the brain is organized:

1) corresponds neurologically to an axonal system (including dendrites).

2) corresponds to the regions (in the brain) the logic starts from (these will be varied places, as inputs, internal or external, vary).

Sensation, moods, emotion, neurohormonal regimes act as meta-controllers, upon both the logic and the universe. For example in case of hyper stress, automatic meta controllers acting on gateway neurons will shut down parts of the brain by starving them of oxygen, and redirect oxygen and fuel towards areas indispensable for survival. So the brain’s logic is controlled by moods, as meta.

***

FALLING OFF A MOUNTAIN, TOO BUSY FOR THINKING:

Once I was delicately crossing a famous and notorious ice gully, equipped with just an ice axe and rock climbing slippers. At the worst moment, I looked up, and saw a cloud of rock silently forming in the sky, 600 meters higher. I started to run, in the hope of reaching the rock on the other side first. However, the avalanche from the partial collapse of said mountain hit my ropes just as I made it to a vertical slab. (The shoulder of that mountain entirely collapsed later, a famous case in Chamonix.)

Torn off my rock holds, I fell, facing certain long and painful demise down the mile-high gully of death (and the death of my partner, who had a lousy belay, from cracks in the one and only mineral block in that ice gully). I had a last thought: not only was I airborne, but I was dead; that was it, survival probability was strictly zero.

However my brain, in a miraculous feat I still cannot believe, to this day, succeeded in blocking me between vertical walls, one of ice, the other of granite, in a chimney position. All the more remarkable as I had only rock slippers (not mountain boots). The amount of unbelievable precision and giant neuronal power to unleash colossal force to stop the already long fall was only possible because all my brainpower was applied only where it mattered.

There was no thinking whatsoever. Actually it’s clear that after I had the thought that I was going to die, for sure, the brain shut down all and any thinking. Consciousness was useless, it just stood in the way, so there was none. Pain and fear did not exist: they were irrelevant.

Thinking, consciousness, pain and fear were obviously completely shut down. All that was left was tremendous will power, enormous mathematical power and the capability to generate an enormous action potential in millions of motor neurons to create gigantic force.

After I stopped, in other inhuman feats I jumped out of the chimney position, grabbed rock, and solo climbed ten meters up to a terrace. It felt like jumping up. When I got to the terrace, and looked at my badly abraded arms, I just could not believe what had happened. I still do not.

“Cogito, ergo sum,” said Descartes. But where do cogito, ergo and sum fit in this gory scene? Nowhere.

Superstitious people who love slogans would just say that “God” took over. Whatever kicks their simplicity.

Clearly what happened has been related many times in similar incident: all my brain’s energy got concentrated exactly where it could make a difference, in a particular application of elementary mechanics, with maximum motor neuron power. Completely extinguishing the rest of brain activity.

Many years ago, a famous solo French sailor, Alain Colas, was in a race in the middle of the ocean. A loop of rope suddenly snapped around his ankle, and nearly completely severed his foot, causing severe blood loss. He had to make a tourniquet to save his life, administer first aid, then bring down his sails, on his giant boat, also to save his life, then try to give the alert. All of this while dragging foot and nerves on the deck. But he did not feel the pain, and he did not go into shock. That happened only when he was done with the essentials.

Anybody who is real hard and has experienced the grand outdoors hundreds of times will have a similar story to relate.

***

MINIMUM INTROSPECTION SHOWS EXISTENCE, & FEELING COME FIRST:

Waking up from total exhaustion, one has first the sensation of existing (“I am!”, or: “I seem to be!”), well before one starts thinking anything remotely organized, or logical. That could certainly be proven by electromagnetic brain studies, BTW.

Somebody in very deep coma demonstrably exists, while often not being in thought, deep or not.

Actually, anybody familiar with heavy exercise knows they can reach points where they are, but do not know too well what anything, including themselves, is all about. They are, but they don’t really think. So being precedes thinking, elaborated or not. When I run uphill at 3,000 meters for more than fifteen minutes, it tends to do this to me, for example.

Moods provide (part of) the context that a logic needs. How does a baby learn the meaning of words? Not from a dictionary, but from emotions. Emotions come first, they provide the semantics of the world, for any growing human mind. I should go back in the essay and point that out, so thank you Paul!

Thus, at first sight, it’s amazing Descartes, an army captain, could make such a mistake. Did he have an agenda? He did.

***

DESCARTES, OR MACHIAVELLISM SERVING EXISTENTIALISM?

I am tough on “Cogito Ergo Sum”, but I should not be so on its author. Indeed there are twists in this story.

Three centuries after Descartes, Sartre, raising the flag of so-called French Existentialism, claimed that existence precedes essence (“l’existence précède l’essence”). That reversed the philosophical view that the essence of something is more fundamental and immutable than its existence (Aquinas defined god as the thing where existence = essence…). So, if one thinks of the essence of man, as one should, to be thinking, then Sartre was (unwittingly?) saying that thinking was emergent.

Descartes was a genius, if there ever was one: he invented analytic geometry, making calculus possible. So why did he say something so absurd? Well, if man existed just from his thinking, it was not because of God.

Descartes’ reasons were grounded in anti-theocracy, subtlety and the advancement of civilization. His new aphorism, “Cogito Ergo Sum”, was iconoclastic.

But iconoclasm yesterday, doctrine tomorrow. Compare the way Descartes broke new ground with his aphorism to the return to primitive theocracy that a modern celebrity such as Václav Havel advocates. Said that otherwise very honorable one: “… one great certainty: Consciousness precedes Being, and not the other way around, as Marxists claim…”. Havel would go on, condemning ours as “the first atheist civilization”, which “has lost its connection with the infinite and with eternity”.

Descartes’ mood was to go where no mind had gone before. Neo-conservatives are rather in the mood of going back again where the logic has thoroughly proved not to be sustainable. No wonder the birth rate is collapsing in such parts.

***

Patrice Ayme