Archive for the ‘Finite Logic’ Category

What Is A Logic? Just A Piece Of Mind

January 15, 2017

I would propose that a logic is anything which can be modelled with a piece and parcel of brain.

I will show, surprisingly enough, that this is a further step in Cartesian Logic.

At first sight, it may look as if I were answering a riddle, by further mysteries. Indeed, but with mysteries which can be subjected to experimental inquiry (now or tomorrow).

What is a brain? A type of Quantum Computer! And what is Computing, and the Quantum? Well, works in progress. There is something called Quantum Logic, but it does not necessarily define the world, as exactly what Quantum Physics is, is still obscure.

In practice? Logic is what works, a set of rules to go from a set A of statements to a set B of statements.

In this perspective, Medieval logic did not decline. Instead it transmuted into mathematics.

The teaching of Logic or Dialectics from a collection of scientific, philosophical and poetic writings, French, 13th century; Bibliotheque Sainte-Genevieve, Paris, France. The 13th century was a time of extreme intellectual activity in Europe, superior to anything else in the world, centered within 800 miles of Paris. In particular the heliocentric system was proposed by Buridan, after he overthrew Aristotelian Physics by inventing and discovering inertia.


An article in Aeon, “The Rise And Fall And Rise Of Logic”, reflects on the importance of the history of the notion of logic:

Reflecting on the history of logic forces us to reflect on what it means to be a reasonable cognitive agent, to think properly. Is it to engage in discussions with others? Is it to think for ourselves? Is it to perform calculations?

“In the Critique of Pure Reason (1781), Immanuel Kant stated that no progress in logic had been made since Aristotle. He therefore concludes that the logic of his time had reached the point of completion. There was no more work to be done. Two hundred years later, after the astonishing developments in the 19th and 20th centuries, with the mathematisation of logic at the hands of thinkers such as George Boole, Gottlob Frege, Bertrand Russell, Alfred Tarski and Kurt Gödel, it’s clear that Kant was dead wrong. But he was also wrong in thinking that there had been no progress since Aristotle up to his time. According to A History of Formal Logic (1961) by the distinguished J M Bocheński, the golden periods for logic were the ancient Greek period, the medieval scholastic period, and the mathematical period of the 19th and 20th centuries. (Throughout this piece, the focus is on the logical traditions that emerged against the background of ancient Greek logic. So Indian and Chinese logic are not included, but medieval Arabic logic is.)”

The old racist Prussian, Kant, a fascist, enslaving cog in the imperial machine turned false philosopher was unsurprisingly incorrect.

The author of the referenced article, Catarina Dutilh Novaes, is professor of philosophy and the Rosalind Franklin fellow in the Department of Theoretical Philosophy at the University of Groningen in the Netherlands. Her work focuses on the philosophy of logic and mathematics, and she is broadly interested in philosophy of mind and science. Her latest book is The Cambridge Companion to Medieval Logic (2016).

She attributes the decline of logic, in the post-medieval period known as the Renaissance and the Enlightenment, to the rise of printed books, self-study and the independent thinker. She rolls out Descartes, and his break from formal logic:

Catarina writes: “Another reason logic gradually lost its prominence in the modern period was the abandonment of predominantly dialectical modes of intellectual enquiry. A passage by René Descartes – yes, the fellow who built a whole philosophical system while sitting on his own by the fireplace in a dressing gown – represents this shift in a particularly poignant way.”

Speaking of how the education of a young pupil should proceed, in Principles of Philosophy (1644) René Descartes writes:

After that, he should study logic. I do not mean the logic of the Schools, for this is strictly speaking nothing but a dialectic which teaches ways of expounding to others what one already knows or even of holding forth without judgment about things one does not know. Such logic corrupts good sense rather than increasing it. I mean instead the kind of logic which teaches us to direct our reason with a view to discovering the truths of which we are ignorant.

Catarina adds: “Descartes hits the nail on the head when he claims that the logic of the Schools (scholastic logic) is not really a logic of discovery. Its chief purpose is justification and exposition.”

Instead, Descartes claims, and I claim, that a new sort of logic arose: Medieval Logic transmuted itself into mathematics (Descartes does not say this, but he means it). And mathematics is not really logical in the strictest sense, as it has too many rules to be strictly logical.

Buridan, a great logician who made a deep study of the Liar Paradox (which gave the Incompleteness Theorems), had students such as (bishop) Oresme, who demonstrated what turned out to be the first practical theorems in calculus (more than two centuries before the formal invention of calculus by Fermat, and Fermat’s discovery of the Fundamental Theorem of Calculus, that integration and differentiation are inverse to each other).

For example, under the influence of Buridan and then Oresme, graphs and later equations themselves were invented. So logic became mathematics. That was blatant by the time Descartes invented Algebraic Geometry. Algebraic Geometry gave ways to deduce, to go from a set A to a set B, using a completely new method never seen before.

In turn, by the Nineteenth Century, mathematical methods contributed to old questions in Logic (the most striking being the use of Cantor Diagonalization to show incompleteness, thanks to the Liar Paradox’s self-referential method).

In this spirit, not only Set Theory, naive or not, but Category Theory can be viewed as a type of logic. So is, of course, computer science. Logic is whatever enables one to deduce. Thus even poetry is a form of logic.

Logic is everywhere there is mental activity, and it is never complete.

If logic is just pieces of brain, then what? Well, some progress in pure logic can be made just by paying attention to how the brain works. The brain works sequentially, temporally, with local linear logics (axonal and dendritic systems). The brain tends to be deprived of contradictions (but not always, and nothing infuriates people more than to be exposed to their own contradictions and gaps in… logic). Also all these pieces of brain, these logics, are not just temporally ordered, but finite.

As we try to use logic to look forward, as a bunch of monkeys messing up our space rock, it is important to realize that what logic is, has not been properly defined, let alone circumscribed. Indeed, if, surprise, surprise, logic has not been properly defined, let alone circumscribed, much more is logically possible than people suspect!

Patrice Aymé


Axiom of Choice: Crazy Math

March 30, 2014

A way to improve thinking is to imagine more, and be more rigorous. What better place to exert these skills than in mathematics and logic? Things are clearer there.

The crucial Axiom Of Choice (AC) in mathematics has crazy consequences. After describing what it is, and evoking some of its insufferable consequences, I will expose why it ought to be rejected, and why the lack of a similar rejection, at the time, in a somewhat similar situation, may have helped in the decay of Greco-Roman antiquity.

This is part of my general, Non-Aristotelian campaign against infinity in mathematics and beyond. The nature of mathematics, long pondered, is touched upon. A 25-centuries-old “proof” is mauled, and not just because it’s fun. There is deep philosophy behind it. Call it the philosophy of sustainability, or of finite energy.

Intolerably Crazy Math From Axiom of Choice


The Axiom of Choice makes you believe you can multiply not just wine, fish and bread, but space itself: AC corresponds, one can say, to a wasteful mentality.

The Axiom of Choice says that, given a collection C of nonempty subsets of a set S, a set exists containing exactly one element from each of those subsets. That sounds innocuous enough, and obvious. And obvious it is, if one thinks of finite sets. However, if C is infinite, it gets boringly complicated.
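For a finite collection, no axiom is needed at all: a choice function can simply be written down. A minimal sketch in Python (the function name `choice` is ours, purely illustrative):

```python
# A "choice function" for a finite collection of nonempty subsets:
# it picks exactly one (arbitrary) element from each subset.
def choice(collection):
    return {frozenset(s): next(iter(s)) for s in collection}

C = [{1, 2}, {3}, {2, 4, 5}]
chosen = choice(C)
# Each chosen element really belongs to its subset.
assert all(chosen[frozenset(s)] in s for s in C)
```

AC only becomes an issue when C is infinite and no explicit rule for choosing can be written down.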

Moreover, AC has a consequence: given a unit sphere, one can cut it in disjoint pieces, and reassemble those pieces to build two unit spheres. Banach and Tarski, both Polish mathematicians working in what’s now Western Ukraine, the object of Putin’s envy and greed, demonstrated this Banach-Tarski paradox. It’s viewed as an object of wonder in General Topology.

I prefer to view it as an object of horror. (The pieces are not Lebesgue measurable, which means they are not physical objects. Such non-measurable sets had been found earlier by Vitali and Hausdorff.)

Punch line? The Axiom Of Choice (AC) is central to all of modern mathematics. Position of conventional mathematicians? The fact that AC is so useful, all over mathematics, proves that AC can be fruitfully considered to be true.

My retort? Maybe what you view as fruitful mathematics is just resting on a false axiom, or at least one against nature, and thus is itself false, or against nature. One may be better off studying mathematics that is not against nature.

As I showed earlier, calculus survives the outlawing of infinity in mathematics. That pretty much means that useful mathematics survives.

You see, a problem with mathematics, even the simplest arithmetic, is that, once one has admitted the infinity postulate, thanks to the Cantor Diagonal process, one can always find undecidable propositions (this is part of the Incompleteness Theorems of mathematical logic: Gödel, etc.).
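The Cantor Diagonal process can be sketched concretely on a finite list of 0/1 sequences: flip the i-th digit of the i-th row, and the result differs from every row in the list. A minimal sketch:

```python
def diagonal(rows):
    # rows: a square list of 0/1 lists; flip the i-th digit of row i.
    # The result differs from row i at position i, so it is in no row.
    return [1 - rows[i][i] for i in range(len(rows))]

rows = [[0, 1, 0], [1, 1, 1], [0, 0, 0]]
d = diagonal(rows)  # → [1, 0, 1]
assert all(d != r for r in rows)
```

On infinite enumerations, the same flip is what yields uncountability and, combined with self-reference, undecidable propositions.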

That means a field such as Euclidean geometry is infinite, in the sense that it has an infinite number of undecidable statements. Each can be settled either way: false, or true. Each gives rise to two mathematics.

Yet, even modern mathematicians will admit that studying Euclidean geometry for an infinite amount of time is of little interest. Proof? They don’t do it.

Yet, what’s the difference with what they are doing?

Mathematics is neurology, and neurology can be anything but infinite. Think about what it means. Yes, mathematics is even cephalopod neurology, with the octopus’ nine brains. Fractals, for example, are part of math, but far from the tradition of equating angles or algebraic expressions.

It’s a big universe out there. The number one consequence to draw from the history of science is that scientists form tribes. Quite often those tribes go astray… for more than 1,000 years (see notes). Worse: by making science and/or mathematics uninteresting, they may lead to a weakening of public intelligence.

I would suggest that effect, making science, and mathematics priestly and narrow minded, contributed to the powerful anti-intellectual tsunami that struck the Roman empire.

Greek mathematicians had excluded all mathematics as unworthy of consideration, but for a strict subset of “Euclid’s Elements” (some of the present Euclid Elements were added later). The implementation of those discoveries was made by others (Indians, and to some extent, Iranians and Arabs).

It turned out that these more practical mathematics, excluded by Euclid, because they were viewed as non rigorous and primitive, led to deeper and more powerful insights.

The irony was that Euclid’s Elements, in the guise of rigor, used an axiom that was not needed in general: the parallel axiom. That axiom, by supposing too much, killed the imagination.

I suggest nothing less happening nowadays, with the Axiom of Choice: it’s one axiom too far.

Patrice Aymé

Technical notes:

Until recently, if one was not a Supersymmetric (SUSY) physicist, it was impossible to find a job in physics, except as a taxi cab driver. There was a practical axiom ruling physics: the world had got to be supersymmetric.

Now the whole SUSY business seems to be imploding as the CERN’s LHC came up empty, and it dawned on participants that there was no reason for an experimental confrontation in the imaginable future… I have studied SUSY, and I have a competitive theory, where there are two hints of experimental proofs imaginable (namely Dark Energy and Dark Matter).

I said the AC was one axiom too far, but actually I think infinity itself is an axiom too far. I exposed earlier what’s wrong with the 25 centuries old proof of infinity (it assumes one can use a symbol one cannot actually evoke, because there is no energy to do so!).

The geocentric astronomy ruled from Aristarchus of Samos (who proposed the heliocentric system, 3rd century BCE) until Buridan (who used inertia, which he had discovered, to make the heliocentric system more reasonable; ~1320 CE; Copernicus learned Buridan’s work in Cracow, Poland). It could be viewed as an axiom.

Hidden axioms are found even in arithmetic; for example the Archimedean Axiom was used implicitly by all mathematicians, before Model Theory logicians detected it around 1950 (it says that, given two positive integers A and B, a third one can be found, D, such that AD > B; if it is not fulfilled, one gets non-standard integers).
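For standard integers the Archimedean property is a small computation; a minimal sketch (the function name is ours, and D = B // A + 1 is one explicit witness):

```python
# Archimedean property: for positive integers A, B there is a D with A*D > B.
def archimedean_witness(A, B):
    return B // A + 1  # one integer guaranteed to overshoot B when multiplied by A

A, B = 7, 1000
D = archimedean_witness(A, B)  # → 143, and 7 * 143 = 1001 > 1000
assert A * D > B
```

Non-standard models are exactly those where no such D exists for some pairs; no program can exhibit one, which is why the axiom stayed hidden so long.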


November 27, 2013

“Information is physical.” Always. Of course. What else?

Yet, the mystery is far from dispelled, as we don’t know what “physical” is. We don’t know, for sure, what physics is. Some roll out the Quantum, and say: “here is physics: it from bit”. However, we are not certain of what the Quantum is (= we don’t know whether quantum theory is “complete” or not; ultimately it’s a Physical Problem, experimentally determined; Von Neumann thought he had a “formal” proof, but he was wrong).

Are there Physical Problems that are not Mathematical Problems? Or Physics Proofs that have no Mathematical Proofs? Well, at this point, there are. Take general fluid flow. Be it water inside a tube, or a meteor going hypersonic, these Physical Problems exist, and have solutions, of which the physical objects themselves are Physical Proofs. It is not clear that they have Mathematical Solutions, let alone Mathematical Proofs.

The post “Theorems From Physics?” claims that:

“mathematical theorems are not supposed to be contingent. This is a fancy philosophical term for propositions that are “true in some possible worlds and false in others.” In particular, the truth of a mathematical proposition is not supposed to depend on any empirical fact about our particular world.”

With all due respect, that’s theology. Conventional theology, so called “Platonism”, but still theology. For me Plato, and his modern parrots are seriously obsolete, and “an embarrassment, for these people are friends”, as Aristotle put it.

I can show that the proof that square root of two is irrational contains assumptions made on an empirical basis (along the lines of mn = nm, actually; similarly, the choice between Presburger arithmetic and Robinson, or Peano, or Ayme arithmetics, can be viewed as empirically driven.)

However, what is an achieved mathematical proof? Just a neural arrangement. Similar neural arrangements in the minds of noble primates called mathematicians. Thus, a mathematical proof is a physical object constructed similarly in the minds of many. So a mathematical Proof is a Physical Proof, just as the fluid in a tube is a Proof of a Physical Problem, the flow problem. And similar tubes have similar “proofs”, once similar fluids similarly flow.

So any Mathematical Proof is a Physical Proof.


Patrice Ayme



1) Could Quantum Theory be Wrong?

(Meaning not as perfect as it is taken to be.) Actually the main objection I have against the Quantum-as-it-is is exactly the same as the objection Isaac Newton had against his own theory of gravitation: instantaneous interaction at a distance with nothing between made no sense, said Newton.

(Einstein remedied this partly by proposing that gravitation was a field propagating at the speed of light.)

2) The preceding was a comment of mine on the “Gödel Lost Letter and P=NP” site in Theorems From Physics?

And most notably the following passage: “The philosopher in us recoils dogmatically at the notion of such a ‘physical proof’… Imagine that someone shows the following: If P is not NP, then some physical principle is violated. Most likely this would be in the form of a Gedankenexperiment, but nevertheless it would be quite interesting. Yet I am at a loss to say what it would mean. Indeed the question is: ‘Is this a proof or not?’”

Actually this is exactly the general method I used to prove there is a largest number. Basically, I said, if there is infinity, there is a violation of the conservation of energy principle. Oh, by the way, if you want to know, in my system, the proof of P = NP is trivial (as everything is polynomial; a four-word proof, so I should get the Clay Prize, hahaha)…


November 5, 2013


Tyranosopher: Finite Logic should be called Non Aristotelian Logic. As I will show.

Simplicius Maximus, a contradictor: I have two objections to your finite math madness. First it makes no sense, and, secondly, even if it did, it would be pointless. 

Tyranosopher: I love contradictions. I squash them, then drink their juicy parts. OK, bring it on. Let’s start with the contradiction you found. A French contributor, Paul de Foucault, already made the objection that m/0 = infinity. 

Sounds good. However, it violates Peano Arithmetic (PA). PA is the arithmetic common to all metamathematics. But for mine, of course. (I violate much, with glee, including the pairing axiom!)

In PA, a.0 = 0 is one of the two axioms defining multiplication. So we see that if x = m/0, we would have x.0 = m. In other words, m = 0.

That’s not surprising: a number called “infinity” is not defined in PA.
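The two PA axioms defining multiplication can be spelled out as a recursion; a minimal sketch (the function name `mul` is ours, purely illustrative):

```python
# Multiplication from the two PA axioms:
#   a·0 = 0   and   a·S(n) = a·n + a   (S = successor)
def mul(a, n):
    if n == 0:
        return 0              # axiom: a·0 = 0
    return mul(a, n - 1) + a  # axiom: a·S(n) = a·n + a

assert mul(7, 0) == 0   # x·0 = 0 for every x: no x can satisfy x·0 = m when m ≠ 0
assert mul(3, 4) == 12
```

Whatever candidate x one proposes for m/0, the first axiom forces x·0 = 0, never m.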

Simplicius Maximus: OK, fine. Here is my objection. It’s well known that the square root of two is irrational. Even Aristotle knew this, but you apparently don’t. And then you give the world lessons about everything. You are a charlatan. 

T: What do you mean by irrational?

SM: Ah, you see? It means square root of two cannot be equal to m/n, where m and n are integers. Let’s abbreviate square root two by sqrt(2). Irrational means the expansion of sqrt(2) never ends. 

T: Why? 

SM: Here is the proof. Suppose sqrt (2) were rational. That means: m/n = sqrt (2). Let’s suppose the terms m and n are as small as possible. That’s crucial to get the contradiction. 

T: Fair enough.

SM: Now, square both sides.  

T: That means, more exactly, that you contrive to multiply the left hand side of the equation by m/n and the right hand side by sqrt(2).

SM: Happy that you can follow that trivial trick. That gives us the equation: mm/nn = 2.  

T: As sqrt (2) sqrt (2) = 2. Indeed. By the way, you made an unwarranted assumption, so I view your reasoning as already faulty, at this point.

SM: Faulty? Are you going mad? 

T: I will dissect your naïve error later. But please finish, Mr. Aristotle. 

SM: Call me Aristotelian if you wish. Multiplying both sides of the equation by nn, we get: mm = 2 nn. That implies that m is even. Because if m were odd, m = 2u + 1, then mm = 4uu + 4u + 1, the sum of an even number (4uu + 4u) plus 1… And that, the sum of an even number with one, is odd. Hence m = 2a.

But then 2a2a = 2 nn, or: 2 aa = nn. Thus n is even (same reasoning as before: the square of an odd number cannot be even). So we see that both m and n are even, a contradiction, as we assumed m and n were the smallest integers with a ratio equal to sqrt (2). 
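(Aside: SM’s conclusion can also be checked mechanically, and finitely; a minimal Python sketch, with an arbitrary bound of 1000 on the denominator:)

```python
from math import isqrt

# Exhaustive finite check: no fraction m/n with 1 <= n <= 1000 has m·m = 2·n·n.
for n in range(1, 1001):
    m = isqrt(2 * n * n)        # the only integer candidate for m
    assert m * m != 2 * n * n   # it always misses: sqrt(2) is not m/n here
```
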

T: This proof is indeed alluded to in Aristotle, and was interpolated much later into Euclid’s Elements. The official Greek mathematicians did not like algebra.

SM: I see that, although you don’t know math, you know historiography.

Tyranosopher: I do know math, I’m just more rigorous than you, august parrot.

Simplicius Maximus: Me, a parrot? Me, and 25 centuries of elite mathematicians who are household names? Are dozens of Fields Medalists also of the avian persuasion? How can you be so vain and smug?

Tyranosopher: Because I’m smarter.

SM: Really? Smarter than Aristotle? 

T: That’s an easy one. People like Aristotle spent a lot of time, all too much time, with politics, not enough with thinking. OK, let’s go back to your very first naive mathematical manipulation. You took the square of both sides. 

SM: Of course I did. 

Tyranosopher: You can’t do that.

SM: Of course I can.

Tyranosopher: No. In FINITE math, a = b does not imply that aa = bb.

SM: Why?

T: Because aa could be meaningless. It could be too big to have meaning. It’s a added to itself a times. If, as we compute aa, we hit the greatest number, #, we must stay silent, as Wittgenstein would have said. 

In FINITE math, the infinite set of integers N does not exist. Only what can be finitely constructed exists. Because there is no way to construct the set N, as it would be infinite (if it existed; that’s a huge difference between what I propose, and what David Hilbert proposed). In my system, integers and rational numbers are constructed, according to the principles I exposed in META, layer by layer, like an onion.
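(T’s rule, that aa can be meaningless past the greatest number, can be sketched as a toy finite arithmetic; the bound chosen below is arbitrary and purely illustrative, not the physical # of the text:)

```python
# A toy "finite arithmetic" with a greatest number, standing in for #.
GREATEST = 10**6  # arbitrary illustrative bound

def fmul(a, b):
    r = a * b
    return r if r <= GREATEST else None  # beyond #, "stay silent": no result

a = b = 2000
assert a == b
assert fmul(a, b) is None  # a = b, yet a·a is meaningless: 4,000,000 > GREATEST
assert fmul(3, 4) == 12    # small products behave as usual
```
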

SM: Wait. There are other proofs of the irrationality of square root of two.

T: Yes, but it’s always the same story: at some point, multiplication is involved, so my objection resurfaces.   

SM: OK, all right. Let me go philosophical. What’s the point of all this madness? Trying to look smarter because you are so vain, at the cost of looking mad? Do you realize that you are throwing out of the window much of modern mathematics?

T: Calm down. Entire parts of math are left untouched, such as topology, category theory, etc. My goal is to refocus all of math according to physics, and deny any worth to the areas that rest on nothing.

All too many mathematicians have engaged in a science as alluring as the counting of angels on a pinhead in the Middle Ages.

SM: Kronecker said: “God created the integers, and the rest was man’s creation.”

T: Precisely, God does not exist, so neither does the infinite set of the integers, N. This will allow mathematicians to refocus on what they can do, and to remember that there is a smallest scale; it would, assuredly, change the methods of proof in many parts.

SM: Such as? 

T: Take the Navier-Stokes fluid equation: one has to realize that, ultimately, the math has got to get grainy. This would help physics too, including all computations having to do with infinities.

SM: You are asking for a mad jump into lala land.

T: We are already in lala land. Finding the correct definitions is even more important than finding the correct theorems (as the latter can’t exist without the former). The reigning axiomatic theory, ZFC (Zermelo-Fraenkel with Choice), requires an infinite number of axioms. What’s more reasonable? An infinite number of axioms, or my finite onion?

The answer is obvious. It’s a NON ARISTOTELIAN WORLD.

In my not so humble opinion, the consequences are far reaching.


Patrice Ayme


October 31, 2013

If we want to get real smart, we will have to leave no reason unturned. Foundations of calculus have been debated for 23 centuries (from Archimedes to the 1960s’ Non Standard Analysis). I cut the Gordian knot in a way never seen before. Nietzsche claimed he “made philosophy with a hammer”; I prefer the sword. Watch me apply it to calculus.

I read in the recent (2013) MIT book “The Outer Limits Of Reason” published by a research mathematician that “all of calculus is based on the modern notions of infinity” (Yanofsky, p 66). That’s a widely held opinion among mathematicians.

Yet, this essay demonstrates that this opinion is silly.

Instead, calculus can be made, just as well, in finite mathematics.

This is not surprising: Fermat invented calculus around 1630 CE, while Cantor made a theory of infinity only 260 years later. That means calculus made sense without infinity. (Newton used this geometric calculus, which is reasonable… with any reasonable function; it’s rendered fully rigorous for all functions by what’s below… roll over Weierstrass… You all, people, were too smart by half!)

If one uses the notion of Greatest Number, all computations of calculus have to become finite (as there is only a finite number of numbers, hey!).

The switch to finitude changes much of mathematics, physics and philosophy. Yet, it has strictly no effect on computation with machines, which, de facto, already operate in a finite universe.

The first part gives generalities on calculus, for those who don’t know much; it can be skipped by mathematicians. The second part is an original contribution to calculus (using high school math!).



Calculus is a non trivial, but intuitive notion. It started in Antiquity by measuring fancy (but symmetric) volumes. This is what Archimedes was doing.

In the Middle Ages, it became more serious. Shortly after the roasting of Jeanne d’Arc, southern French engineers invented field guns (this movable artillery, plus the annihilation of the longbow archers, is what turned the fortunes of the South against the London-Paris polity, and extended the so-called “100 Years War” by another 400 years). Computing trajectories became of the essence. Gunners could see that Buridan had been right, and Aristotle’s physics was wrong.

Calculus allowed one to measure the trajectory of a cannonball from its initial speed and orientation (speed varies because air resistance varies with speed, so it’s tricky). Another thing calculus could do was measure the surface below a curve, and relate curve and surface. The point? Sometimes one is known, and not the other. Higher dimensional versions exist (then one relates with volumes).

Thanks to the philosopher and captain Descartes, inventor of algebraic geometry, all this could be put into algebraic expressions.

Example: the shape of a sphere is known (by its definition); calculus allows one to compute its volume. Or one can compute where the maximum, or an inflection point, of a curve is, etc.

Archimedes made the first computations for simple cases like the sphere, with slices. He sliced up the object he wanted, and approximated its shape by easy-to-compute slices, some bigger, some smaller than the object itself (now they are called Riemann sums, from the 19C mathematician, but they ought to be called after Archimedes, who truly invented them, 22 centuries earlier). As he let the thickness of the slices go to zero, Archimedes got the volume of the shape he wanted.
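Archimedes’ slicing translates directly into a finite sum; a minimal Python sketch, approximating the volume of a unit sphere by n thin discs (n = 10,000 is an arbitrary choice):

```python
import math

def sphere_volume(r, n):
    # Slice the sphere into n horizontal discs of thickness 2r/n and
    # sum their volumes: exactly the Archimedes / Riemann sum of the text.
    h = 2 * r / n
    total = 0.0
    for i in range(n):
        z = -r + (i + 0.5) * h                   # midpoint height of slice i
        total += math.pi * (r * r - z * z) * h   # volume of the circular disc
    return total

approx = sphere_volume(1.0, 10_000)
assert abs(approx - 4 / 3 * math.pi) < 1e-6  # close to the exact 4πr³/3
```

Note the sum is finite at every stage; only the idealized limit invokes infinity.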

As the slices got thinner and thinner, there were more and more of them. From that came the idea that calculus NEEDED the infinite to work (and by a sort of infection, all of mathematics and logic was viewed as having to do with infinity). As I will show, that’s not true.

Calculus also allows to introduce differential equations, in which a process is computed from what drives its evolution.

Fermat demonstrated the fundamental theorem of calculus: the integral was the surface below a curve, differentiating that integral gives the curve back; otherwise said, differentiating and integrating are inverse operations of each other (up to constants).

Then came Newton and Leibnitz. Newton went on with the informal, intuitive Archimedes-Fermat approach, what one should call the GEOMETRIC CALCULUS. It’s clearly rigorous enough (the twisted examples devised in the nineteenth century became an entire industry, and graduate students in math have to learn them. Fermat, Leibnitz and Newton, though, would have pretty much shrugged them off, saying the spirit of calculus was violated by this hair splitting!)

Leibnitz tried to introduce “infinitesimals”. Bishop Berkeley was delighted to point out that these made no sense. It would take “Model Theory”, a discipline from mathematical logic, to make the “infinitesimals” logically consistent. However the top mathematician Alain Connes is scornful of infinitesimals, stressing that nobody could point one out. However… I have the same objection for… irrational numbers. Point at pi for me, Alain… Well, you can’t. My point entirely, making your point irrelevant.



Yes, Alain Connes, infinitesimals cannot be pointed at. Actually, there are no points in the universe: so says Quantum physics. The Quantum says: all dynamics is waves, and waves point only vaguely.

However, Alain, I have the same objection with most numbers used in present day mathematics. (Actually the set of numbers I believe exist has measure zero relative to the set of so-called “real” numbers, which are anything but real… from my point of view!)

As I have explained in GREATEST NUMBER, the finite amount of energy at our disposal within our spacetime horizon reduces the number of symbols we can use to a finite number. Once we have used the last symbol, there is nothing more we can say. At some point, the expression N + 1 cannot be written. Let’s symbolize by # the largest number. Then 1/# is the smallest number. (Actually (# – 1)/# is the fraction with the largest components.)

Thus, there are only so many symbols one can actually use in the usual computation of a derivative (as computers know well).  Archimedes could have used only so many slices. (The whole infinity thing started with Zeno and his turtle, and the ever thinner slices of Archimedes; the Quantum changes the whole thing.)

Let’s go concrete: computing the derivative of x -> xx. It’s obtained by taking what the mathematician Cauchy, circa 1820, called the “limit” of the ratio: ((x + h) (x + h) – xx)/h. Geometrically this is the slope of the line through the points (x, xx) and (x + h, (x + h) (x + h)) of the x -> xx curve. That ratio is (2x + h). Then Cauchy said: “Let h tend to zero; in the limit h is zero, so we find 2x.” In my case, h can only take a number of values, increasingly smaller, but they stop. So ultimately, the slope is 2x + 1/#. (Not, as Cauchy had it, 2x.)

Of course, the computer making the computation itself occupies some spacetime energy, and thus can never get to 1/# (as it monopolizes some of the matter used for the symbols). In other words, as far as any machine is concerned, 1/# = 0! In other words, 1/# is… infinitesimal.
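The machine’s own de facto 1/# is easy to exhibit: halve h in the difference quotient of x -> xx until adding h no longer changes x. A small illustrative sketch (the computation is ours):

```python
def slope(x, h):
    # Difference quotient of x -> x·x: algebraically equal to 2x + h.
    return ((x + h) * (x + h) - x * x) / h

x, h = 1.0, 1.0
while x + h / 2 != x:   # stop once h/2 falls below the machine's smallest step
    h /= 2
# h is now the last usable step: for this machine, anything smaller "is" zero.
assert abs(slope(x, h) - 2.0) < 1e-3   # the last computable quotient is ≈ 2x
```

Below that h, the quotient cannot be formed at all: the machine has run out of symbols, exactly as the text describes for #.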

This generalizes to all of calculus. Thus calculus is left intact by finitude.


Patrice Ayme


Note: Cauchy, a prolific and major mathematician, but also an upright, fanatical Catholic who refused for decades to take an oath to the government, hampering his career, would have found it natural to believe in infinity… the latter being the very definition of god.


October 8, 2013

I pursue my (energy motivated) program of turning all mathematics and logic, FINITE. I define the appropriate notion of META. Not just that, but I use the notion to make any logic into a chrono-logy. (A Chronology/Semantic Hierarchy evades the logical paradoxes.)

This is extremely advanced material, well beyond the edge of what’s commonly understood, using implicitly the implicated order from my sub-Quantum theory. Still most of the notions used below are easy to understand!


The notion of “META” is fundamental for the analysis of any system of thoughts or emotions. What’s going meta? I claim: Any theory has meta-theories associated to itself.

If one looks at the literature of meta, it’s a big mess. Recently it was encumbered by a sensationalist author obsessed by “strange loops” (Douglas Hofstadter, in books starting in 1979 with Gödel, Escher, Bach…)

Studying meta with “strange loops” is older than Aristotle (see the Cretan paradox below).

However the notion of meta I introduce here is much more general (although it contains the “strange loops” thingy, it also evades it, see below!)

To understand the essence of meta, one has to go back to bare-bone logic.

Given a language L, one can talk within that language L. However, what is L made of? L = (LOG, TRUE, U). “LOG” is the logic, U the Universe of objects the logic applies to. The logic consists of a set of assembly rules that can be applied again and again to objects of U to make constructions. “TRUE” is a label applied to some Well Formed Formulas (WFF) within LOG. (Not all WFFs are TRUE.)

Example: suppose LOG is the usual logic, and U consists only of the set made of 3 elements: eat, banana, good. Then ((eat, banana) –> good), a Well Formed Formula from LOG and U, could be the (one and only) TRUE formula (all WFFs are true in some purely formal sense).     
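This toy language is small enough to write out in full. A minimal sketch (the tuple encoding ((a, b), c) for an implication is my own choice of representation, not anything canonical):

```python
from itertools import product

# Toy version of L = (LOG, TRUE, U): the universe U has three objects,
# the logic builds implications (a, b) -> c from them, and TRUE singles
# out one well formed formula among all of them.

U = {"eat", "banana", "good"}

# All well formed formulas of the shape ((a, b) -> c), with a, b, c in U.
WFF = {((a, b), c) for a, b, c in product(U, repeat=3)}

# TRUE is a (here one-element) subset of WFF, chosen by fiat.
TRUE = {(("eat", "banana"), "good")}

assert TRUE <= WFF          # every true formula is well formed
print(len(WFF), len(TRUE))  # 27 well formed formulas, 1 true one
```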

Metalogic and metamathematics, as usually understood, arose when Cantor showed that the Real Numbers are uncountable. Cantor was the metamathematician par excellence (he invented cardinal and ordinal theory). Cynics would say that’s why Cantor went mad: he went a few “metas” too far.

Relatively simple modifications of (one of) Cantor’s proofs, his diagonalization trick, led to the revelation that any logical system containing the usual arithmetic is incomplete: statements can be made within it that can be neither proved nor disproved (which statements, that’s not always clear; although Cantor’s Continuum Hypothesis is one of them…).
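The mechanism of the diagonal trick can be shown in miniature, and fittingly for this essay, in an entirely finite form (Cantor’s actual argument of course needs infinite sequences):

```python
# Cantor's diagonal trick in miniature: given any finite list of 0/1
# sequences (each as long as the list), build a sequence that differs
# from the k-th entry at position k, hence appears nowhere in the list.

def diagonal_escape(rows):
    """Return a 0/1 sequence differing from rows[k] at index k."""
    return [1 - rows[k][k] for k in range(len(rows))]

rows = [
    [0, 0, 0],
    [1, 1, 0],
    [0, 1, 1],
]
d = diagonal_escape(rows)
print(d)                 # [1, 0, 0]
assert d not in rows     # the diagonal sequence escapes the enumeration
```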

From my point of view, the problem with the most honorable, and usual, metalogic is that it uses infinity to go from logic to metalogic. I believe only in finite stuff. (Still, the Cretan/Liar paradox, which started the field 26 centuries ago, looks finite, although it truly is not…)

However one can define meta easily in a finite (or not!) setting:

TRUE (by definition the set of all true WFFs) is a subset of WFF, the set of all WFFs. (LOG2, TRUE2, U2) is meta relative to (LOG1, TRUE1, U1) if and only if each of the three sets of the latter is a subset of the corresponding set of the former, at least one of them strictly (say TRUE2 strictly includes TRUE1, or U2 strictly includes U1).
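Because the definition is pure set inclusion, it is directly computable in the finite case. A minimal sketch, with theories modelled as triples of frozensets (the sample rule and statement names are made up for illustration):

```python
# Sketch of the finite "meta" relation: (LOG2, TRUE2, U2) is meta
# relative to (LOG1, TRUE1, U1) when each component of the first triple
# contains the corresponding component of the second, at least one
# containment being strict.

def is_meta(theory2, theory1):
    log2, true2, u2 = theory2
    log1, true1, u1 = theory1
    contains = log1 <= log2 and true1 <= true2 and u1 <= u2
    strictly = log1 < log2 or true1 < true2 or u1 < u2
    return contains and strictly

# Hypothetical sample theories: the second strictly extends TRUE.
t1 = (frozenset({"modus_ponens"}), frozenset({"p"}), frozenset({"p", "q"}))
t2 = (frozenset({"modus_ponens"}), frozenset({"p", "q"}), frozenset({"p", "q"}))

print(is_meta(t2, t1))  # True: TRUE2 strictly includes TRUE1
print(is_meta(t1, t1))  # False: no strict inclusion, a theory is not meta to itself
```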

So meta carries over as a useful concept in the finite realm, and has nothing to do with confusing causal loops.

How is the 26 centuries old Liar paradox solved in this scheme? That’s the paradox presented by the statement:

“This statement is false.”

Well, that deserves its own essay. Let’s just say I was chuckling all the way about how clever I was, until I discovered that my first solution was exactly the one found by Buridan seven centuries ago, and the second one, using my theory of meta above, resulting in a semantic hierarchy, was somewhat similar in spirit to that of Alfred Tarski.

Buridan’s solution is excellent (he notices that “This statement is false” is equivalent to A and non-A, so it is obviously false). However, it is too ad hoc: one needs to handle contradictions where the implication chain is longer (A –> B –> non-A). Thus:

My hierarchy idea is to build the language L by layers, like an onion, starting with a core (LOG, TRUE(0), U). One assumes that the initial set TRUE(0) of WFFs is non-contradictory. Call that SEMANTIC(0). Then one grows TRUE by using LOG and U, one implication (or operation) of LOG at a time. Operating LOG once on TRUE(0), one gets TRUE(1). Either TRUE(1) contains a self-contradiction, or not. If it does, stop: (LOG, TRUE, U) admits no META. If it does not, call it SEMANTIC(1), and proceed to TRUE(2). And so on. The iteration gives a notion of time (like a clock in a computer). L(n + 1) is richer than L(n), etc.
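The layered construction above can be sketched as a program. This is only an illustrative model under simplifying assumptions: rules are single-premise implications, and the negation of a statement s is encoded as the pair ("not", s):

```python
# Sketch of the layered SEMANTIC(n) construction: grow TRUE one
# application of the implication rules at a time, stopping ("no META")
# the moment some statement and its negation are both derived.

def grow(true_set, rules):
    """One clock tick: add the head of every rule whose body is already true."""
    step = set(true_set)
    for body, head in rules:
        if body in true_set:
            step.add(head)
    return step

def semantic_hierarchy(true0, rules, max_ticks=10):
    layers = [set(true0)]                 # SEMANTIC(0)
    for _ in range(max_ticks):
        nxt = grow(layers[-1], rules)
        if any(("not", s) in nxt for s in nxt):
            return layers, "contradiction: no META"   # A and non-A derived
        if nxt == layers[-1]:
            break                                     # fixed point reached
        layers.append(nxt)
    return layers, "consistent"

# A --> B --> non-A: the contradiction surfaces only after two ticks,
# which a one-step test in the style of Buridan would miss.
rules = [("A", "B"), ("B", ("not", "A"))]
layers, verdict = semantic_hierarchy({"A"}, rules)
print(len(layers), verdict)
```

The clock-tick structure is exactly what turns the logic into a chrono-logy: each consistent layer is one unit of logical time.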

Thus META allows one to build a hierarchy of logics and semantics. To say that a theory is “meta” relative to another can be rigorously defined.

Progress in understanding is always achieved by climbing up the Semantic Hierarchy of meta.


Patrice Ayme