Abstract: I propose to search for truth in a different way, one more appropriate to potential catastrophes. Political application of this new method would be life saving, on a planetary scale. The new method, being intrinsically teleological, is closer in spirit to Lamarck than to Descartes. To the probability obtained by refined scientific reasoning, one adds the probability computed by assuming that everything one can imagine could go wrong did (I call this the philosophical probability, as philosophy does not reason from the mediocre median, but from the exceptional case).



I promote a new way of establishing certainty which should have tremendous impact on how decisions are taken, especially in politics. There is plenty of evidence that the Obama administration, far from following such a method, is taking decisions the old-fashioned way, with catastrophic consequences presently unfolding.

Humanity is at the crossroads between a radiant future, and a holocaust of six billion, after ruining the planet within a generation or two. The choice is now. It depends upon finding truth, and plenty of it. Probabilities are a big help to ascertain truth. But the usual method for computing probabilities is fatally flawed, and leads people astray by creating false certainties galore.

To the probability of a rare event computed from a complicated theory, one should add a new term: the probability that the theory itself is wrong, multiplied by the probability of getting the same rare event should that be so. Resorting to this new term is how CERN physicists answered critics who feared their accelerator would destroy the world. They did not trust their own entangled mathematical theories (GUT, QCD, and General Relativity).

Because a complex theory has a high probability of being false, the new factor should dominate when probabilities are low, the sort of probabilities characteristic of catastrophes. The new factor is conventionally ignored. I suggest using inverted, catastrophic logic to evaluate it.

The conventional term is built with rigorous mathematics, and it depends upon complicated data. It is organized like clockwork. But, like clockwork, it is good only if all its ingredients, empirical and logical, are 100% exact. A tall order. The logic breaks down if a piece is dead wood instead of refined crystal.

Not only that, but the conventional term can be called the scientific term. Science is common sense articulated around elements of reality that independent probability analysis has established as true, certain, irrefutable. Thus this conventional first term is built only with elements of perceived reality and conventional logic that are well known, and it can only lead to conclusions that are well known.

But catastrophes are not well known before their time.

The new term is built with the philosophical method. One can call it the philosophical term. The philosophical method is common sense, articulated around potential elements of reality that destroy the preexisting paradigms. Those elements of potential reality may have been perceived, or guessed, but they have not been tested enough to be considered certain. They could have been perceived just once. But these elements, should they turn out to be real, are game changers (an example below is dark comets; famous game changers in physics are the double slit experiment (Young), the electric current generated by a moving magnet (Faraday), and the photoelectric effect (Hertz)). Of course game changers are why there are catastrophes, so the philosophical term is more appropriate for looking out for potential disasters.

The philosophical term uses as ingredients what could go wrong, boosted with maximal imagination, and weighing more gravely the riskier outcomes and ominous warnings.

So one may abstract this new approach to risk evaluation with the following equation: risk of catastrophe = [scientific probability] + [philosophical probability].




Conventional probabilities twist logic perversely. The tendency is to compute what can easily be computed, while ignoring the rest.

An example is given by studies of potential collisions of celestial bodies with our planet. To compute such a probability, scientists naturally look at SITUATIONS THEY CAN COMPUTE WITH. That means asteroids. Asteroids can be seen. Never mind if something worse is lurking out there. OLD PROBABILITY THEORY MEANS OUT OF COMPUTATION, OUT OF MIND. Asteroids are space rocks on well established trajectories. The reason they are well established is that their former colleagues have already collided with the planets. The survivors are well behaved; they follow intricate avoidance trajectories in resonance with the planets. Hence collisions with asteroids are rare. Accordingly, low probabilities are found.

Another reasonable approach to estimating collision probability seemed to be to look at the ground, search for impacts, and estimate how many collisions there have been. The problem with that approach is that it also selects for the same space rocks, because only big rocks can reach the ground. The end result is that two probabilistic approaches come up with (roughly) the same probability, giving a false sense of security. Both approaches are biased towards asteroids. One bias is computational (asteroid trajectories can be computed), and the other is observational (asteroids can be seen and their impacts can be observed).

The correct catastrophic question is not to ask how often a space rock would hit, but instead how often the earth is likely to be hit by something. So catastrophic logic asks: what could that something be? The philosophical method gives an immediate answer. PHILOSOPHY GUESSES GENERAL RULES FROM RARE EXAMPLES. (Whereas science makes laws from the systematic return of the same.)

In 1983, comet IRAS-Araki-Alcock passed within 300 Earth diameters of our planet. That was the closest known approach by so big an object in 200 years. It was detected only 2 weeks out, because it was so dark: comet IAA had only 1 percent of its surface active. It was going at a relative speed of 44 kilometers per second, and its impact with Earth would have caused an explosion of 200 million megatons of TNT. Yes, more than ten billion times Hiroshima. Most of civilization would have been taken out. Preceding statistics had underestimated such an occurrence by at least one hundred times.

Comet Borrelly, visited by NASA’s Deep Space 1 probe in 2001, was found to have extremely dark patches over much of its surface. The enormous explosion a century ago at Tunguska in Siberia, enough to kill 30 million people if it happened nowadays over a megacity, was probably a piece of comet (besides the fact that it exploded at 8,000 meters, leaving no debris, showing it was not very stony, it happened on the very day the Earth was crossing a well known meteoritic stream made of comet remnants).

There is some evidence of a double cometary impact off Indonesia and Norway in the sixth century, more exactly in 536 CE, with nefarious worldwide consequences (frosts, darkness). A huge trailing fireball may also have blazed across North American skies 12,900 years ago, scorching half of the continent (archeology finds a continent-wide soot layer and micro diamonds, with massive changes all over).

So this is a case where finding the probability of whatever can be computed has been far from helpful. Planners have years to consider any scary asteroid. So they have not been too worried about asteroids. But asteroids are not the problem.

A planetary defense system, ready with warnings of only a few hours, should instead be set up to handle comets. A thermonuclear armed missile could certainly do away with a Tunguska style impactor, even with an hour’s warning (a cometary fragment 100 meters across would not be dangerous if blown to pieces). In any case, a serious theory of dark comets is necessary. Some scientists (Napier and Asher, 2009) point out that a bit of (philosophical) thinking shows that comets become dark as they age. So many may be lurking out there. This is the correct approach: instead of computing asteroids to no end, thinking outside of the box.


The best example of a huge philosophical mistake is given by the Greco-Roman civilization. That civilization decided to develop slavery instead of higher technology, condemning itself to a dead end. (The Franks corrected that mistake, by outlawing slavery at the outset.)

Another philosophical mistake arose from the young British PM Pitt’s hostility to the French revolution (which, after all, was just following in the footsteps of the earlier English revolutions, in which Louis XIV of France refused to partake, although the English king begged him to). That unjustified anger brought 25 years of wars all over Europe, and many millions dead. (British Prime Minister Lloyd George recognized the Pitts for what they were, really the pits, 130 years after the fact.)

The USA has made colossal philosophical mistakes in the last 40 years (many of them reminiscent of the Greco-Romans, because they overemphasized the exploitation of man by man as the major engine of the economy, such as when Nixon decided to make the health of Americans a profit center).

USA president Obama has an armada of scientific advisers (or at least 4). Together they have pushed for a colossal (120 billion dollars, says New Scientist) stimulus in science and technology. That is an excellent decision, very favorable to civilization. As long as it has legs (and is not just a jerk to the system).

But Obama has no philosophical adviser. This reflects the deeply erroneous belief that philosophy is so easy, any lawyer can do it.

In truth, it’s the other way around: precisely because it deals with certainties, it is science that is much easier than philosophy. Science looks like magic to the commons only because of massive deficiencies in the educational system.

But scientific progress gets really hard when it needs new philosophy to progress. ANY MAJOR SCIENTIFIC PROGRESS WAS FIRST OF ALL A PHILOSOPHICAL JUMP. There are no exceptions; “major philosophical jump” is nearly the definition of “major scientific revolution”. And this extends to mathematics: any ultra major mathematical advance was first of all a philosophical jump. (Those who scream too loudly at this point may calm down if submitted to the intricacies of the foundations of Quantum Mechanics; in QM, not even light moves at the speed of light: see Feynman’s “QED”, page 89.)

Philosophy is much harder than science, because it is common sense applied to rare or heretofore undescribed events or patterns. It is the hard edge of science, far out. Newton knew this. So did Einstein. Lamarck was the discoverer of the theory of biological evolution, of the immense age of the earth (from studying the evolution of clams), and of the order of invertebrates, among other things. Although Lamarck was one of the world’s first biology research professors, and although he was very famous during the French revolution, as conservatism returned he became the butt of jokes, died in misery, and his spirit ate crow for two centuries after that. Why? Because, as a scientist of civilizational class, he had stepped on the philosophical toes of conventional wisdom. And so had his science. Several celebrated British students of Lamarck (Wallace, Darwin) could not restore their teacher’s post mortem reputation. Lamarck kept being an object of ridicule until, on Darwin’s 200th birthday, epigenetics came back in force.

Studying philosophy, history or sociology is in full contradiction with the official philosophy of the USA, the fact that Americans are supposed to “trust in” and be “under” God. So philosophy, and examining life generally, is done much less, and incapacity for the most elementary logic reigns. When asked why he does not nationalize the banks, Obama and his entire government revert to the logic of the cave people, or, as bin Laden would say: “it’s not our culture, it’s not traditional, we have more than five banks…” A silly juxtaposition, I know, but telling that it can be made. No pride of rationality, no glory in the human spirit: in God we trust, under God we are…

And sure enough, Obama has not reached philosophical clarity in Iraq and Afghanistan. Instead he pursues exactly the same mistakes as his predecessor, with renewed military enthusiasm. Why? Because he has not evaluated the enormously high probabilities of the disastrous outcomes the USA is cruising towards, in both cases. How am I so sure? Well, because the catastrophic scenarios in both cases are all too likely and way too stable in their catastrophic nature.

If he wanted to do a good job of reflection, Obama would have to reason catastrophically, backwards out of the imaginable pitfalls. But he may be unable to do this, any more than he has been able to do it with the tiny little bank problem with which he is ruining the world’s economy.

To think that Iraq will not someday take its vengeance and stab the USA in the back for what it did is the sort of naivety that affects those who have never lived history.

So the USA is sinking in depression, while its army tries to ravage the Middle East. It’s like Roman history never happened. Shame. Remember Julian, Barack, and stop before knowing an ominous fate. Do not ask what God will do for you, imagine what catastrophe will do to your country. Please reason backwards from the worst possible outcome.

What we just saw is that planetary defense, “Spaceguard”, is oriented towards unlikely asteroids because asteroid theory is easier. Whereas the real danger is Dark Comets. This was pretty obvious ever since the Tunguska explosion. To be alarmed by it one has to use the philosophical method (because it was a one time event, just as comet IAA was a one time event).

Policy decisions and legislation depend on probabilities. Any probability of an event “E” is computed using a reasoning. The probability P depends upon that reasoning, R, and P is justified only if R is correct. So the real contribution of E through R is not P, but the product of P with the probability p[R] that R is correct: P p[R]. Now there is also a probability that the reasoning R is false and that the event E happens nevertheless. Call that Q. In the end the total probability of E is: P p[R] + Q (1 − p[R]) = (P − Q) p[R] + Q.

Notice that the more complex the theory R, the lower the probability p[R] that the reasoning R is correct, and so the probability coming out of such a reasoning R, if it is very small, is irrelevant. Some critics have used this approach to claim that the CERN accelerator in Geneva is dangerous. The reasoning R, in the case of CERN, is QCD, plus dubious science such as the physics of black holes. There is no way that this unholy assemblage is all true. Conscious of this, CERN physicists changed tactics, and discarded the basic theory behind their accelerator to answer their critics.

If P is very small (like the probability of blowing up the world by turning on the CERN LHC accelerator), and Q is not, P becomes irrelevant. The probability of the event E is then roughly Q (1 − p[R]). Now a complicated theory like QCD (the theory behind CERN) gives us only a very low probability p[R] of R being right, so the total is roughly Q. Everything gets dominated by a probability that has nothing to do with what mathematically oriented statisticians usually look at.
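The formula P p[R] + Q (1 − p[R]) can be sketched numerically. A minimal sketch: the function implements the equation from the text, while the sample numbers are invented purely for illustration.

```python
# Total probability of event E, per the formula in the text:
#   P_total = P * p_R + Q * (1 - p_R)
# where P   = probability of E computed *inside* the theory R,
#       p_R = probability that the reasoning R is correct,
#       Q   = probability of E should R turn out to be wrong.

def total_probability(P: float, p_R: float, Q: float) -> float:
    """Combine the in-theory probability with the theory-failure term."""
    return P * p_R + Q * (1 - p_R)

# Illustrative (made-up) numbers: a tiny in-theory risk P,
# a complex theory trusted at only 50%, and a non-negligible Q.
P, p_R, Q = 1e-9, 0.5, 1e-3
print(total_probability(P, p_R, Q))  # dominated by Q * (1 - p_R), about 5e-4
```

The point of the sketch is visible in the output: when P is minuscule and the theory is doubtful, the answer is essentially Q times the chance the theory fails, no matter how refined the in-theory computation was.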

Perhaps intuitively guessing the preceding, instead of playing the game of going into the probably erroneous details of modern physics, CERN physicists argued that, even if the theory were false, the probability of an adverse outcome was insignificant, because the universe produces collisions at these sorts of energies, and some a million times more powerful, all the time (so the universe would make mini black holes and strange matter all the time, if they could be made at these energies; since it does not, these energies are safe). This was the correct reasoning: not from a dubious theoretical ground up, but from heavenly evidence down.



OK, let’s focus minds with a particular case: the probability of a catastrophic outcome of greenhouse warming, turning it into global heating. It has been computed many times. Each time the results are worse, and there is a good reason for that.

What happens every time is that each greenhouse probability computation depends upon greenhouse gases. The best known is CO2 (carbon dioxide); the next, a greater threat, is CH4 (methane). Another powerful greenhouse gas is H2O (water), as anybody who has spent a night in the desert will testify. The hotter it gets, the more H2O goes up, and the more CH4 comes out of permafrost, so the greenhouse effect increases nonlinearly (the more it increases, the faster it increases).

There are other greenhouse gases, found in minute quantities, but much more powerful as warming agents. Over 100 years, methane has 25 times the Warming Potential of CO2, and nitrous oxide 298 times. Over ten years, the warming potential of methane is much higher still. Incorporating these gases accounts for about 15% of the warming. But recently some gases were found with 20,000 times (yes, twenty thousand times) the Warming Potential of CO2. This seriously changes the probability of massive heating. That is the type of situation where a probability computation fails because of missing ingredients. The reasoning R is not false intrinsically, but, without all the data in, it becomes so.
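The bookkeeping behind such warming comparisons can be sketched as follows. This uses only the 100-year Warming Potential figures quoted above (25 for methane, 298 for nitrous oxide); the emission tonnages are invented for illustration.

```python
# CO2-equivalent bookkeeping with the 100-year Global Warming
# Potentials quoted in the text (CO2: 1, CH4: 25, N2O: 298).
GWP_100 = {"CO2": 1, "CH4": 25, "N2O": 298}

def co2_equivalent(emissions_tonnes: dict) -> float:
    """Sum emissions, each weighted by its warming potential."""
    return sum(GWP_100[gas] * tonnes for gas, tonnes in emissions_tonnes.items())

# Made-up emission figures, for illustration only:
emissions = {"CO2": 1000.0, "CH4": 10.0, "N2O": 1.0}
print(co2_equivalent(emissions))  # 1000 + 250 + 298 = 1548.0 tonnes CO2-eq
```

Notice how even trace quantities of the high-potential gases dominate their own rows of the sum: leave any of them out of the reasoning R, and the computed probability of massive heating is silently wrong.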

So what do I suggest? Well, start with what we know that could go wrong. That is exactly the most complete initial conditions to evolve logic from. One does not expect anything less from an airplane pilot. If she started with hoping for the best, and only considered what she can compute, instead of preparing for the worst, and the unexpected, she would not live very long. Nor would we.



As I said above, Spaceguard has been looking at asteroids because asteroids can be computed. The same occurs with “Global Warming”. The IPCC (Intergovernmental Panel on Climate Change) did not incorporate the possible melting of Greenland and Antarctica in its predictions. This is astounding, because this is the most dangerous proximal effect of global heating. Greenland is all white and sits low in the blue Atlantic ocean; it does not require much brain power to see that it will melt. If it does, it’s 6 meters of water…

Why did the IPCC neglect this melt? Well, you guessed it: too hard to compute. So the IPCC ignores it outright, like Spaceguard ignored dark comets outright.

The correct catastrophic probability approach is to observe that total melting has happened before, and that if it happened again it would raise the oceans by 72 meters. Thus it’s possible, and a great danger, so it should be viewed as an end one wants to avoid. Plus there is the corroborating danger that the oceanic level is rising 3 mm a year, faster than the IPCC projections.



There can be logical flaws in the most precise reasoning. Mathematics is full of perfect reasonings that were found wanting later on. More exactly, by changing the logic R, much more was found later. It was not that R was really “false”. No, R was often found to be incomplete, to rest on unjustified, non-explicit steps. (This is why it has been found hard to get computers to check mathematical proofs: they are not that logical, after all!) Overall, mathematical reasonings often worked not just from logic, but also from convention and tradition.

Example. It was long thought there was no number that, multiplied by itself, would produce a negative number. It was a sort of reasoning by inspection: the probability of seeing such a number was zero, well, because no one had seen one before. BUT INSPECTION OF A BOX DOES NOT MEAN THERE IS NOTHING OUTSIDE OF THE BOX.

Indeed, an Italian surgeon-mathematician, Cardano, came around and solved some equations by assuming, during the computation, that there were numbers with negative squares. It turns out that there was just a psychological block: if one assumed such numbers existed, the whole world was easier to interpret, and ever since, one has assumed that they exist. These numbers made it easier to visualize electricity, electromagnetism and Quantum Mechanics. Mathematicians now view these “complex numbers” as most natural, and physics cannot be done without them.
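A minimal sketch of that psychological block dissolving: assume numbers with negative squares exist, compute with them, and a purely real answer falls out. (The cubic and its root below are Bombelli’s classic example, stated here from general knowledge, not from the text.)

```python
# Numbers whose square is negative: Python's built-in complex type.
i = 1j                      # the imaginary unit
assert i * i == -1          # a "number with a negative square"

# Cardano/Bombelli-style: complex intermediates yield a real answer.
# The cubic x^3 = 15x + 4 has the real root x = 4, reached through
# cube roots of 2 + 11i and 2 - 11i (Bombelli's classic example).
u = (2 + 11j) ** (1 / 3)    # principal cube root: 2 + i
v = (2 - 11j) ** (1 / 3)    # principal cube root: 2 - i
x = u + v                   # the imaginary parts cancel
print(round(x.real, 6))     # 4.0
```

The “impossible” numbers appear only in the middle of the computation; the final root is perfectly real, which is exactly what unlocked Cardano’s equations.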

The same happened with curved geometry. Although it is everywhere, and it preceded Euclidean geometry, Euclid did to it what he did to the zero (another Greek invention that had to emigrate to India to be treated well). Euclid had a reasoning bias: he admitted as a worthy object of study what he could present a lot of logic about, instead of making the philosophical jump to realize that he was after small logical fry instead of really big game.



False logics can lead to true results. An example is infinitesimal calculus (in its original Leibniz version). As Berkeley pointed out, the logic made little sense. But certainly the results were true all over, and soon used in engineering. It would take three centuries, and a big advance in logics (model theory), to justify Leibniz’s infinitesimals directly; Cauchy gave a different rigorous version of calculus, using limits, 150 years after Leibniz’s initial invention.

An example of a falsehood bringing truth, from physics: Maxwell established the equations of electromagnetism using the hypothesis of the luminiferous ether. The ether was later shown not to exist. It came to be implicitly understood that the waves created the space, to some extent. Dirac systematized the madness by predicting spin and antimatter just this way: he imposed the simplest Maxwell-like wave equation on electrons, and did not care that he had no (non local) space for his equation.



So what to deduce for policy making? Well, when trying to compute the probability of some catastrophe, one should start from the worst possible conclusion, and try to find out whether there is a way to get, backwards, from there to present conditions. That is how the CERN physicists argued: there was no way to get to a catastrophe at CERN, because the universe tries all the time, and fails to get there.

In climate research, the opposite is found: planet Earth was often in its HOT MODE. So it certainly can get there. In that mode, dinosaurs and crocodilians lived in the polar regions. The tipping point out of the hot mode is the concentration of CO2 below which Antarctica freezes over (~425 ppm CO2 equivalent). This is below a level of greenhouse gases we have already passed, on the way up. So we could have utter planetary destruction in 30 years; we are in mortal danger, and one should bring CO2 emissions to zero right away, because CO2 emissions are how we got here. Instead Obama is proposing to “capture carbon”, a wild goose chase in which the mentally challenged nebulously run around emitting gas, hoping for the best.

In the case of Afghanistan, or the Middle East and South Asia in general, the worst outcomes are absolutely terrifying: they involve nuclear war, and even nuclear world war. That’s bad. But what is worse is that it is easy to build scenarios to get from there to where we are, and conversely. Thus policy searching for better outcomes should be focused on poisoning these scenarios before they can unfold. Much fewer nukes worldwide is an obvious solution.

In economics, the advisers of Obama view it as “improbable” that economic conditions will get to 20% further down in housing prices, and more than 10% unemployment. They define those as “depression conditions”. Of course at the present rate of collapse we will be there in three months, but they say no, because they say it would be improbable. It is true that it did not happen in the last 70 years, so, making a theory from that duration, it can’t happen. (So, having decided it will not happen, they do nothing to prevent it, and one can safely predict it is going to be a disaster!)

Far from this circular logic, the catastrophic approach is to extend present trends, and realize that a deep depression is likely. Then, having realized the enormity of the disaster at hand, one has to go backwards, and see what could derail that scenario.

Well, the only thing that could derail catastrophe is basically what the advisers of Obama refuse to do: dump the rich, refuse to keep on sending them taxpayer money, and use the money of the People to give cheap housing, cheap health care, and jobs, jobs, jobs.



The old-fashioned way of computing probabilities ignores the fact that any probability computation lives in a logical universe, and that the probability that this logical universe represents all the ingredients relevant to the situation at hand has itself to be evaluated.

One needs just a glance at the crashing world financial system to see what “fatally flawed” means. In modern times, manmade disasters that had been viewed as very unlikely have been happening with increasing frequency. Examples are a few holocausts, two world wars and two great depressions (not counting the present one).

On a slightly larger time scale, there were important breakdowns of the Western moral code. The wars that happened around the French revolution were totally unnecessary, and killed millions in France and more around Europe. After the initial hatred of the Old Regime for the Rights of Man had caused an invasion of France, the blood kept flowing for 25 years because one “unlikely” event followed another. By contrast, vast periods of history have been very calm (some are found in the Middle Ages!)

In North America, the American Revolution brought more slavery for Africans and holocaust for the Native Americans. That would have been viewed by the proponents of the Enlightenment who wanted American independence as another “unlikely” event. How likely was the American Civil War, the bloodiest civil war known? And how likely was it that Great Britain would imitate USA methods in South Africa as it forced the (white) Boers to surrender their republic?

Important advances were made in logic in the twentieth century. A basic trick was used throughout: GOING META. Going meta makes a theory of the theory; it manipulates it from the outside. The same basic trick can be carried over into the realm of computing probabilities. The disregard of the moral code by the Nazis, or by the USA (official reintroduction of torture), was unimaginable. It looked totally improbable before it occurred.

The usual approach to what is probable is dangerously misleading, because it WILL ALWAYS catastrophically underestimate a potential catastrophe (strategic, nuclear, financial, climatic, etc.). This new meta law of probability is pernicious: it says that the worse the potential catastrophe, the rarer it will be, and the more the usual way of computing its probability is logically flawed.

So what do I suggest doing when trying to find how probable a catastrophe is? Well, I propose to backtrack from the catastrophe itself: do not ask your theory how unlikely catastrophe is; ask catastrophe what it can do for your theory. As I showed, knowing that dark comets exist, and that so does the hot mode of Earth’s climate, radically augments the probability of an unhappy outcome.


Conclusion: The way truth theory is generally used is wrong. It ignores that any logical system carries its own truth inside, but that this truth has no value outside. Conventional probability computations of catastrophes are flawed, or, more exactly, metalogically flawed. Yes, there is such a concept as “metalogical”, and it led to the greatest advances in logic in 2,500 years. In metalogics, one applies logic to a logic from the outside. Any logic has countless metalogics surrounding it. The incompleteness theorems of logic say there are any number of ways to build them up. To be logical is necessary; to be metalogical is prudent. Probability without metalogics is folly.

The philosophical probability weighs these metalogics according to the threats they represent. So it proceeds from the end (telos: end, goal, result). That’s why it is teleological. “Teleological” has not been philosophically popular in recent centuries, because it smacks of God, intelligent design, or animals thinking about how they should evolve (the latter being the oft made parody of Lamarckism).

But a moment’s thought reveals that all intelligence is teleological: the animal wants to save its life, so it rushes into the hole. It does not rush into the hole first and then want to save its life. The flight into the hole is teleologically motivated: a Cretaceous critter fleeing for its life has not yet experienced catastrophic doom between the jaws of the T. rex five meters away. But it reasons BACKWARDS from a theory of catastrophe it has.

So, in the end, I am just suggesting to be a bit more systematic about something the ancestors of rats understood 200 million years ago.

RISK OF CATASTROPHE = [SCIENTIFIC PROBABILITY] + [PHILOSOPHICAL PROBABILITY]. But the worse it can get, the more the philosophical term dominates.


Patrice Ayme.



Addendum: More on Spaceguard.

Of course, ideally, any impact warning system should have a capacity for tracking and changing orbits of potentially harmful near-Earth asteroids and comets years ahead of impact. Unfortunately we don’t know how to do this, and years may not be available: cometary trajectories can be both hidden and chaotic (as they fall off the Oort cloud or graze giant planets).

The kinetic energy per unit mass of comet IAA scales as the square of its 44 km/s speed: about 2,000 (two thousand) times that of a speeding bullet. Hard to deflect: we could do little with a comet within 2 weeks of impact, considering that its mass is of the order of a thousand billion tons.
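A back-of-envelope check of these figures. The 44 km/s speed and thousand-billion-ton mass come from the text; the ~1 km/s rifle-bullet speed is my assumption.

```python
# Back-of-envelope check of the comet IAA figures in the text.
v_comet = 44_000.0        # m/s (from the text: 44 km/s)
v_bullet = 1_000.0        # m/s (assumed typical rifle-bullet speed)
mass = 1e12 * 1000.0      # kg (a thousand billion tons)

# Kinetic energy per unit mass scales as v^2:
ratio = (v_comet / v_bullet) ** 2
print(ratio)              # 1936.0: roughly 2,000x a speeding bullet

# Total impact energy in megatons of TNT (1 Mt TNT = 4.184e15 J):
energy_joules = 0.5 * mass * v_comet ** 2
megatons = energy_joules / 4.184e15
print(f"{megatons:.2e}")  # about 2.3e8, i.e. ~200 million megatons
```

Both quoted numbers check out: the velocity-squared ratio gives the “2,000 times a bullet” figure, and the half-m-v-squared total reproduces the 200-million-megaton estimate cited earlier for comet IAA.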

But what happens if, sometime around 5:47 am in some month of June in the near future, as we cross the same comet debris stream again, astronomers discover that a Tunguska sized impactor is within two hours of hitting New York City? The Tunguska object was between 30 and 50 meters across, and exploded with about 15 Megatons of TNT equivalent (1,000 times Hiroshima), flattening 2,000 square kilometers of forest (an area greater than Washington DC).

Interestingly, we could do something about it with existing technology. Any explosion of a thermonuclear bomb in the vicinity of such an object, more than ten seconds before impact, should be mitigating (even with the EMP, as long as people close their eyes!).


Final word from T.S. Eliot:

“What we call the beginning is often the end
And to make an end is to make a beginning.
The end is where we start from.”


