Archive for the ‘Foundation mathematics’ Category

HOW MATHEMATICS EMPOWERS Souls With Wiser, More Powerful Abstractions: CONCEPTUAL DIMENSION THEORY

February 8, 2020

MATH IS A LANGUAGE WHOSE WORDS ARE NOT JUST THOUGHTS MADE OF SETS OF OBSERVATIONS, BUT COMPLICATED UNOBVIOUS LOGICAL SYSTEMS, Endowed With High Dimensions:

Abstract: What’s Math? And why does it matter?[1] Mathematics uses words denoting high dimensional concepts (defined subsequently). Those dimensions are the vertices of sophisticated logical systems. Logic itself is physics (nature), as basic as it goes. Thus mathematics is a maximally logically concentrated language which speaks of, and with, various conclusions humanity has drawn from the universe (that’s what “abstract” means: drawn away from!). Hence mathematics’ beauty, even poetry, let alone intelligence, all flowing from its enormous logical power.

Warning: Some of this essay is very basic, some of it on the forward edge of human understanding, and thus controversial. Readers should feel free to skip the harder sections.

***

Mathematical concepts are hyper powerful because they are neurologically multidimensional and those dimensions are logically equivalent.

The power of mathematics comes from its power to abstract entire trains of thought, and more. This way is not unique to mathematics. Normal language works the same way. But mathematics is just much more powerful. As I will try to explain, the words of mathematics are much higher dimensional. 

If we say “red” (in any human language), we mean electromagnetic radiation within a more or less well defined wavelength range (which can be measured in fractions of a meter, or multiples of the size of an atom). It doesn’t matter in which human language “red” is said: it’s always the same idea: a range of wavelengths, equivalently frequencies.[2]

A prehistoric man may have measured “red” as the wavelength of light reflected by blood, or bauxite, or iron oxide. Not exactly the same connotation, but the same general idea: a range of electromagnetic wavelengths.

“Red” is a concept. So is a “parabola”: a concept too. But the second one is tied into, and indeed is, a much more complicated logic, with many aspects.

A parabola embodies some sort of fixed equidistance between one point and a line. A hyperbola, some sort of fixed difference of the distances to two points. Two different subtle notions about distance. The two concepts are in turn full of corollaries and theorems: consequences unexpected at first sight. Ellipses are the set of points whose sum of distances to two points is fixed. It turns out that this is the trajectory of an object submitted to inertia counterbalanced by a force proportional to the inverse square of the distance to a central point.

However a “parabola” is not just one concept, but many concepts, logics, so-called “theorems”. When you kick a soccer ball (or shoot an arrow, fire a missile or throw a stone) on a planet without atmosphere, it arcs up and comes down again, following a parabola (on a planet with atmosphere, the parabola shrivels a bit into a more complicated curve, which can also be computed). A parabola is the set of points equidistant (same distance) from a fixed line (the directrix) and a point (the focus).
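
For readers who want the one step glossed over here, this is the standard elimination of time (a short sketch, not in the original post, using only the setup just described: launch speed v, launch angle θ, uniform gravity g, no atmosphere):

```latex
x(t) = v\cos\theta\, t, \qquad y(t) = v\sin\theta\, t - \tfrac{1}{2} g t^{2}
\;\Longrightarrow\; y = x\tan\theta \;-\; \frac{g}{2 v^{2}\cos^{2}\theta}\, x^{2}
```

That is, y is a quadratic in x with a negative leading coefficient: a downward-opening parabola.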

A parabola has this profitable property: any ray parallel to the axis of symmetry gets reflected off the surface straight to the focus. One can see the interest if one wants to concentrate (say) solar power, or, conversely, have a focus of heat send back a beam of parallel heat… or parallel light, as in a lamp. If we slice through a cone, parallel to its side, we also get a parabola. The Ancients knew this. Menaechmus, in the 4th century BC, discovered a way to solve the problem of doubling the cube using parabolas (rather than with compass and straightedge alone).
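
Both properties just cited are easy to check numerically. Below is a minimal sketch (my own illustration, not from the original post) for the standard parabola y = x²/(4p), with focus (0, p) and directrix y = −p; the value of p is an arbitrary choice for the demo:

```python
# Checks, for y = x^2/(4p): (1) every point is equidistant from focus and directrix;
# (2) a ray coming down parallel to the axis reflects off the tangent straight to the focus.
import numpy as np

p = 1.7                                  # arbitrary focal parameter (assumption for the demo)
focus = np.array([0.0, p])

for x0 in np.linspace(-5, 5, 11):
    y0 = x0**2 / (4*p)
    P = np.array([x0, y0])

    # (1) focus-directrix property
    d_focus = np.linalg.norm(P - focus)
    d_directrix = y0 + p                 # vertical distance to the line y = -p
    assert abs(d_focus - d_directrix) < 1e-12

    # (2) reflection property
    t = np.array([1.0, x0/(2*p)])        # tangent direction (slope dy/dx = x0/(2p))
    t = t / np.linalg.norm(t)
    d = np.array([0.0, -1.0])            # incoming ray, parallel to the axis of symmetry
    r = 2*np.dot(d, t)*t - d             # reflect the incoming ray about the tangent line
    to_focus = focus - P
    cross = r[0]*to_focus[1] - r[1]*to_focus[0]
    assert abs(cross) < 1e-12 and np.dot(r, to_focus) > 0   # reflected ray aims at the focus

print("focus-directrix and reflection properties verified numerically")
```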

With such useful properties, parabolas are all over mathematics and physics, engineering and technology. A celestial body on a parabolic trajectory probably came from outside the solar system (and certainly so if the trajectory is hyperbolic, the next conic section over…). Hence, when mathematicians, physicists and engineers brandish the word “parabola”, they actually brandish lots of elaborated logic, enough to fill up an entire book of senior high school mathematics. We are far here from a simple range of frequencies. So “parabola” is an abbreviation of thoughts.

***

Patrice’s DIMENSIONAL POWER OF CONCEPT THEORY:

The dimension of a mathematical concept shall be equal to the number of different neurological networks called upon by its various definitions, definitions which are not obviously equivalent, yet are mathematically equivalent.

One could object to this definition that it is subjective: that, if we were much more clever, the equivalence of the different definitions of a given mathematical concept would be glaringly obvious, etc. However, we have reached a level of intelligence sufficient to conquer the galaxy (if we don’t self-destruct, a big if, it’s only a question of time). So we have here a particular level of intelligence which is absolutely, if roughly, defined.

To further dig into the notion of “subjectivity”: the notion of “mathematically equivalent” is different from “logically equivalent”: mathematics is, partly, a social construct. For example, mathematicians did excellent infinitesimal calculus, getting great results using Descartes’ Algebraic Geometry, for two centuries without a rigorous definition of “calculus” (and now we have too many notions!). This is no accident, but caused by the “neural networks” definition of mathematics. When we say that mathematical concepts are made of logical assemblies of neural networks, we are also alluding to the saying that the proof of the pudding is in the eating. This was practiced before, but not explicitly said, causing confusion. Something was clearly missing. What is mathematics? I say: neural networks. Before this, the best authorities on the subject had nothing very deep to say about it. An example is Bertrand Russell, an authority in the Foundations of Mathematics (he found a glaring problem in the foundations of Set Theory and replaced it with the Theory of Types… launching an industry of foundations of mathematics…).

As Bertrand Russell put it… well before neural networks (I have long meditated his quote, and it helped bring me where I am):

As this essay shows, and as I have long held, this quote expresses a thought which, unsurprisingly, turns out to be untrue. Why? Because it excludes the neural network definition of mathematics… which I embrace (as I created it!). It’s unsurprising, because, as Russell would have been the first to admit, mathematics works, and thus is, somehow, true. I show how.

Here is Bertrand more fully quoted: “Pure mathematics consists entirely of assertions to the effect that, if such and such a proposition is true of anything, then such and such another proposition is true of that thing. It is essential not to discuss whether the first proposition is really true, and not to mention what the anything is, of which it is supposed to be true. […] Thus mathematics may be defined as the subject in which we never know what we are talking about, nor whether what we are saying is true. People who have been puzzled by the beginnings of mathematics will, I hope, find comfort in this definition, and will probably agree that it is accurate.”

Explanation, in a more modern language which Russell, living a century ago, couldn’t have had the notion of: neural networks don’t have to prove they are true, because, as soon as they exist, they are. Mathematics is all about neural networks, proving their equivalences, or building more with them (hence the success of category theory). Hence Russell was wrong: mathematics contains absolute truths, the truths of the neural networks which depict them.

Bertrand Russell was on the trail which led where yours truly got.

Anyway the point here is to demonstrate, first of all, the role of mathematics in human intelligence, and how it relates to the universe.

That sort of dimensional approach can be extended to other concepts, for example love (sexual, parental, romantic, etc.). Love is obviously, in some sense, very high dimensional… but not in the mathematical sense, because there are no rigorous proofs of the logical equivalence of the various notions of love (said logical equivalences making their own networks)… for the good and simple reason that those equivalences are often illusory or false, as they call upon different neurohormonal systems.

Each word is a theory. In normal language, as in mathematics. Neurologically, each word is a network. The concept of elephant is well-known to be made of various attributes, as described by blind men: a tail, tusks, legs like tree trunks, a belly like a cave, ears like giant leaves, etc. And it eats trees, doesn’t forget, and can be tamed. So the concept of an elephant is a network.

A mathematical object or concept is often similar, with various, widely different aspects… but those aspects can be demonstrated to be all equivalent, modulo lots of logic. Math concepts are like the concept of elephant, with various aspects, but logically tied together: where the tail implies the tusks and the trunk, and the ears and the big feet. The number of these neurologically different aspects of one mathematical concept I call the conceptual dimension of that concept.

Let me go on with my little example. “Red” is, literally, a one dimensional concept: a color is more or less red, as the frequency varies along the spectrum. Now the dimension of a function is simply described: a function, or a space, of n arguments, or n coordinates, is n dimensional. So how does the brain work? It has inputs and outputs. Inputs are known as senses. The senses are actually made of dedicated processing organs. For example the “visual area” has 17 or so processing sub-organs. The end result, though, is that “red” is PERCEIVED AS ONE INPUT. So we will call it ONE DIMENSIONAL. For that reason alone? Not quite: electromagnetism literally demonstrates that “red” is indeed a range of frequencies; it’s one dimensional in its fundamental input.

A “Parabola” is high dimensional. Why? The reason is simple: a parabola has several different definitions. “Different” means that they look nothing like each other. They can be proven to be all equivalent, through a lot of mathematics and other keen observations. However, those equivalences are not obvious. Parabolas were known to have wonderful properties… for twenty centuries… before it was discovered that they describe the trajectory of a projectile submitted to gravity.

By waging what he called his “War on Mars”, Kepler was able to prove that Mars followed an ellipse. However, it took another 70 years or so before Newton published a more or less finished proof that Kepler’s Three Laws of planetary motion (including the ellipse) were equivalent to inertia plus the inverse-square-of-the-distance law. This is Newton’s greatest claim to fame (and many astronomers and mathematicians in Paris, from which came the gravitation law, would have liked to prove it… so it was not easy to do). The bottom line is that here we have two completely mathematically equivalent definitions, and one can go from one to the other only through enormously hard work. Another definition of an ellipse, equivalent through more hard work, and that one known for 24 centuries, is that it’s a particular section of a cone.
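
The equivalence can at least be illustrated numerically, if not proven that way. The following minimal sketch (my own, with assumed units GM = 1 and arbitrary initial conditions) integrates inertia plus an inverse-square pull, then checks that the computed orbit satisfies the geometric definition of the ellipse, namely that the sum of distances to the two foci stays equal to 2a:

```python
# Integrate inertia + inverse-square attraction, then verify the "sum of distances
# to two foci is constant" definition of the ellipse along the computed orbit.
import numpy as np

GM = 1.0
r = np.array([1.0, 0.0])                 # arbitrary starting position
v = np.array([0.0, 1.2])                 # below escape speed sqrt(2): a bound, non-circular orbit
dt = 1e-3

def accel(pos):
    return -GM * pos / np.linalg.norm(pos)**3

# conserved quantities fix the ellipse's geometry before any integration
E = 0.5 * v @ v - GM / np.linalg.norm(r)          # specific orbital energy (negative: bound)
a = -GM / (2 * E)                                  # semi-major axis
L = r[0]*v[1] - r[1]*v[0]                          # specific angular momentum (z component)
e_vec = np.array([v[1]*L, -v[0]*L]) / GM - r/np.linalg.norm(r)   # eccentricity vector
focus2 = -2 * a * e_vec                            # second focus (attracting center sits at the first, the origin)

worst = 0.0
for _ in range(20000):                             # a bit more than one full orbit
    a0 = accel(r)
    r = r + v*dt + 0.5*a0*dt*dt                    # velocity Verlet step
    v = v + 0.5*(a0 + accel(r))*dt
    r1 = np.linalg.norm(r)                         # distance to the occupied focus
    r2 = np.linalg.norm(r - focus2)                # distance to the empty focus
    worst = max(worst, abs(r1 + r2 - 2*a))

print(f"semi-major axis a = {a:.4f}, max |r1 + r2 - 2a| along the orbit = {worst:.2e}")
```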

So “ellipse”, like parabola, is a concept that is at least three dimensional: it is the equivalence of three completely distinct neural networks.   

Much mathematics consists in proving that completely different notions and approaches (different neural networks) are equivalent. For example, in differential geometry, the famous Atiyah–Singer index theorem, proved by Michael Atiyah and Isadore Singer (1963), states that for an elliptic differential operator on a compact manifold, the analytical index (related to the dimension of the space of solutions of some operators on the manifold) is equal to the topological index (defined in terms of some topological data/network). That equivalence in turn includes many other theorems as special cases, and has applications to theoretical physics.

***

Is mathematics the language of the universe? No. The universe doesn’t talk; it just is. Mathematics is the smartest language of Homo Sapiens, talking about the universe in the most abstracted, thus most powerful, fashion!

Traditionally, it is said that Galileo discovered that, without air, a body would follow a parabola (artillery men had long discovered something like that was true). Galileo said: “Philosophy is written in that great book which ever lies before our eyes — I mean the universe — but we cannot understand it if we do not first learn the language and grasp the symbols, in which it is written. This book is written in the mathematical language, and the symbols are triangles, circles and other geometrical figures, without whose help it is impossible to comprehend a single word of it; without which one wanders in vain through a dark labyrinth.”  

And so it goes, all over mathematics. The exponential is an arsenal of theorems. The square root of (-1), even more so. To understand the square root of negative numbers means to understand the complex numbers, the “largest” field (both of the latter words are themselves mathematical concepts, that is, sets of most significant theorems).

The word “red” is already a broad abstraction of a vast field of possibilities. But the exponential or the complex numbers, or any mathematical concept, can symbolize entire logical systems. Exp and the complex numbers are actually connected by the famous equation: exp(ix) = cos x + i sin x… where i is the square root of minus one. So, in particular, exp(iπ) = -1…
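
A quick numerical sanity check of those two identities, using only Python’s standard library (the test angle is an arbitrary choice):

```python
import cmath, math

x = 0.7345                                        # arbitrary test angle
lhs = cmath.exp(1j * x)
rhs = complex(math.cos(x), math.sin(x))
assert abs(lhs - rhs) < 1e-15                     # exp(ix) = cos x + i sin x

assert abs(cmath.exp(1j * math.pi) + 1) < 1e-15   # Euler's identity: exp(i*pi) = -1
print("exp(ix) = cos x + i sin x and exp(i*pi) = -1 check out numerically")
```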

Introducing basic, crucial mathematics to the uncouth multitudes is necessary, as Plato himself proclaimed at the entrance of his Academy… Said multitudes absolutely need a more intuitive grasp of mathematics to become cogent enough about the world to help shepherd our great leaders toward enough sanity to ensure survival of the species. Hence the interest of a good perspective on parabolas, and on what their different coefficients mean.

Not the easiest method to solve the quadratic equation, of course, as changing variables by taking X = x + b/2 (for a monic quadratic) as the new variable is algebraically irresistible, and solves the equation in four lines or so.
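
For completeness, here are those four lines, written for the monic case x² + bx + c = 0 (an assumption made to match the substitution X = x + b/2 just mentioned; divide through by the leading coefficient first if it isn’t 1):

```latex
x^{2} + bx + c = 0, \qquad X = x + \tfrac{b}{2}
\;\Longrightarrow\; X^{2} - \tfrac{b^{2}}{4} + c = 0
\;\Longrightarrow\; X = \pm\sqrt{\tfrac{b^{2}}{4} - c}
\;\Longrightarrow\; x = \frac{-b \pm \sqrt{b^{2} - 4c}}{2}
```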

Parabolas, and ellipses (both conic sections) were central to Seventeenth Century physics.

However, in the Nineteenth Century, waves rose to prominence: first with light as a wave, then Fourier analysis (decomposing periodic motions into sums of cosines and sines), then electromagnetism. It turns out (plenty of theorems) that all these come from the exponential!

Without a thorough grasp of exponentials, phenomena such as the CO2 catastrophe, or pandemics, can only escape the understanding of commoners and god-struck politicians. Exponentials grow at an instantaneous speed equal to their instantaneous value… exactly like a bacterial colony. Most catastrophes involve exponentials. Exponentials also describe all sorts of decays and, glued together, the most frequent probability distributions.
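
A minimal sketch of that defining property, dy/dt = k·y (the rate k and the ten-hour horizon are arbitrary assumptions, chosen so the colony roughly doubles every hour):

```python
# Grow a quantity whose instantaneous growth rate equals k times its instantaneous value,
# by crude Euler stepping, and compare with the exact exponential solution.
import math

k = 0.693                # per hour: roughly a doubling every hour, like a happy bacterial colony
y = 1.0                  # starting population (arbitrary units)
dt = 1e-4                # small time step for the crude Euler integration
t = 0.0
while t < 10.0:          # ten hours
    y += k * y * dt      # growth proportional to the current value
    t += dt

exact = math.exp(k * 10.0)
print(f"Euler: {y:.1f}   exact e^(k t): {exact:.1f}   doubling time: {math.log(2)/k:.3f} h")
```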

***

Math beauty, the beauty of neural networks. Neural networks give us power, and we find that beautiful…

HIGH POWER CONCEPTS HAVE HIGH DIMENSION:

All this goes meta. Example: the concept of “Coronavirus” (“Crown Shaped Virus”). Antivirals against some types of Coronavirus act against others (Remdesivir). So what is logically connected can be collectively treated. This is why broad concepts feed intelligence, thus action power.

By going meta, I mean that (rough) equivalences of foundations themselves form high dimensional conceptual objects: Category Theory is, by itself, such an object.

Another, more practical example: Infinitesimal Calculus. Infinitesimal Calculus has many different definitions, more or less equivalent, the earliest dating back to Archimedes, and then another one, which I call the Infinitesimal Geometric Calculus, developed by the Buridan school in the Fourteenth Century (this is the one Newton used). The most recent definitions of infinitesimals (Robinson et al.) are from the Twenty-First Century (Karel Hrbáček, 2006). This means the field is still an area of fully active research! More dimensions to be added!

This makes Infinitesimal Calculus, according to my definition, a very high dimensional object. Refined, high dimensional thinking was of course hated by the terroristic, mentally simplistic Roman Catholic Church. Accordingly, infinitesimals were the subject of political and religious controversies in Seventeenth Century Europe, including a ban on infinitesimals issued by clerics in Rome in 1632! (Notice that this was before the birth of Leibniz or Newton, to whom the creation of calculus is often erroneously attributed by Anglo-German tribalists…)

Mathematics is the language whose words are ready-made sets of powerful thoughts (for example, word-concepts such as “parabola”, or the “exponential”, come with an arsenal of thoughts and inner logic).

By learning to speak and think math, we learn a metalanguage, the most powerful language humanity has written, and keeps writing, whose elements belong to, and depict, the world. Mathematics and, even more, logic are the skeletons of physics, and the latter is how the world is made. To have more advanced thoughts on what the world is made of, they are not just the eyes, but the senses one can’t do without.

One could call mathematics the Post-Prehistoric Language. [3]

In any case, mathematics is the surest, inescapable way to more powerful thinking. [4] Even the lousiest pseudo-philosophers nowadays know some mathematics more important than what Archimedes himself knew (a truly, horrendously offensive thought!). The more advanced thinking they got imprinted with in primary school, much of it mathematical, helps to explain why even the lousiest official thinkers nowadays are smarter than the Ancients.

When communicating mathematics, one communicates with entire, high dimensional logical systems.[5] Thus the language is hyper powerful: it has huge logical bandwidth.

Patrice Ayme

***

***

[1] Plato famously interdicted access to his Academy to all non-mathematicians. The essay above explains why. Top philosophy can’t indulge feeble minds too much, except in the lab, to study them. Mastery of contemporary math ensures some minimum standard of intellectual capability.

By the way, my neurological network definition of mathematics shows that the Platonic world of math, out there, was… all along inside Plato’s head. Or the heads of all mathematicians (including those in kindergarten…).

***

[2] A range of frequencies is of course the post-Maxwell description/explanation… Now, prehistoric man would have shrugged that he knew red when he saw it: in sunsets, blood, bauxite, flowers… That comes down to the brain being excited in the same way each time, a particular pattern: there is no logic to it.

***

[3] “Postmodernism” means, of course, nothing. Because when was “modernism”? When William The Conqueror suggested that the Earth turned around the Sun, before freeing all the slaves of England, while his friend the Abbot Berengar was suggesting that Reason was what was meant by God (to the impotent fury of the Vatican)? That was during the Eleventh Century… Whereas “Prehistory”, defined as what came before the Neolithic (because the Neolithic is entering history, thanks to lots of archeology), is certainly a well-defined notion. Prehistoric men knew concepts such as red, as in bloody sunsets, very well. But they had little notion of parabolas… except of course, in practice, when they threw a projectile at a prey or predator…

***

[4] Learning math doesn’t guarantee wisdom, especially not anti-fascist wisdom; to wit, Plato. The deplorable “modern” case being Kant. Kant started as an astronomer, a co-discoverer of the concept of galaxy. He should have stuck to that, instead of helping turn hundreds of millions of Germans (over a few generations) into moralizing murder robots.

Many people are full of hatred, and they don’t even suspect it. Worse: the Zeitgeist, the spirit of the times, is to pretend that there is such a thing as good, moralizing people, bereft of hatred. A contradictio in adjecto.

Philosophically, of course, Kant’s most important characteristic was that of an enslaving pre-Nazi robot, proving mathematics produces plenty of idiot savants. Nietzsche, an excellent philosopher, was no mathematician, but a philologist (a lover of the logos, of the interpretation of the meaning of texts; recently the term hermeneutics is preferred, because it sounds more savant).

Descartes, of course, was one of the greatest minds and a very astute psychologist… and he used psychology to further math, by forcing math into more useful logic… something I also advocate in my stance relative to infinity! A lot of top scientists were top philosophers, having to invent new philosophy to invent new physics (Maxwell’s identification of electromagnetism and light, Boltzmann’s murky states, and Poincaré’s local space and time being obvious examples). And of course the Foundations of Quantum Physics are a philosophical abyss, questioning time, space, and reality itself into an uncertain, not to say ethereal, medium…

***

[5] The dimension of a logical system is the minimal number of axioms in its axiomatics. Don’t look it up: I invented the notion. It boils down to the usual definition of dimension in a manifold (by subtracting the axioms in common).

What Are Numbers? Math Is The Most Abstracted Physics!

June 27, 2019

German mathematician Richard Dedekind (1831–1916) published in 1888 a paper entitled Was sind und was sollen die Zahlen? What are numbers and what should they be? 

Here is my answer: forget what you know. 

Numbers are neural networks. Small numbers have small networks; big ones, big networks; so the nature of numbers changes as they get bigger… (According to me, listening, delighted, to the indignant screams of distant mathematicians.)

Diagram chasing connects all of them: not a coincidence. Instead of having “It from Bit”, one has it from action (the arrow in Category Theory, the action potential with neurons, the fundamental process in physics…).

A few immediate applications of this master idea:

  1. Numbers are learned, because neural networks are learned.
  2. Advanced animals, having advanced neural networks, should be capable of having those neural networks we call numbers.
  3. Big numbers are different from small numbers, because big neural networks are different from small ones. Here again is the idea that energy should matter in mathematics (the conventional thinking being just the opposite: energy doesn’t matter).

***

Kronecker also quipped: “God made the natural numbers. Everything else is the work of man.”

Numbers were subsequently defined from Set Theory, invented for the purpose. Later, Bertrand Russell found a problem with Set Theory: the set of sets which are not elements of themselves brought a contradiction. Russell tried to get out of that with a hyper-complicated theory. In modern times, mathematicians prefer to use Category Theory. [1]

I go beast on how to construct numbers. Beasts have brains, and brains have neural networks.

Kronecker thought mathematics is the work of man. But, actually all advanced animals move in a way proving they are capable of differential calculus. Far from being the work of god, differential calculus is the “work” of dog. Without differential calculus, that dog can’t hunt. OK, dog is not conscious of god, or of the calculus it’s using. So what?   

Now for a few easy bits:

*** 

Let’s notice that numbers are definitely the work of the genus Homo: 

Consider the integer 152. 152 is the work of man. Just like “Yes” is the work of the Englishman. 

152 means: 1×100 + 5×10 + 2. But that’s only in base ten. In base 60, the same digits would mean: 1×60×60 + 5×60 + 2… which converts to 3,902 back in base ten.

So “152” is not an absolute notion. For that integer to make sense, the base in which it lives has to be expressed (and what the notation means, such as 2 = 1+1…). The Babylonians invented base 60 to handle big numbers in astronomy. We still use base 60 to this day, for angles and time. So “152” is a cultural construction. In several ways.
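
A minimal sketch of the point (my own illustration, not from the original post): the same digit string denotes different integers once the base changes, so the base has to be declared.

```python
def digits_to_int(digits, base):
    """Interpret a list of digits (most significant first) in the given base."""
    value = 0
    for d in digits:
        value = value * base + d
    return value

print(digits_to_int([1, 5, 2], 10))   # 152   (base ten)
print(digits_to_int([1, 5, 2], 60))   # 3902  (the Babylonians' base sixty)
```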

***

So how come Platonists claim that numbers live out there, in a special realm of their own, if there is so much human explanation and convention to provide with just basic numbers? Most mathematicians also believe they are exploring that realm of Plato. But actually all they are exploring is the possible connections which can be built within the neural networks inside their brains. So they are exploring physics, a bit like a child on a beach explores which sand castle she can get away with. A difference with building sand castles is that the possibilities are few and are carefully recorded, becoming the body of that culture and language called “mathematics”.

An example is the Archimedean axiom. The Greeks knew about it well: it’s in Euclid, and it says that, given two magnitudes, A and B, there is always an integer n so that: nA > B.
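
Stated constructively, the axiom even hands us a witness: n = ⌊B/A⌋ + 1 already works. A trivial sketch (the example magnitudes are arbitrary):

```python
# The Archimedean axiom, checked constructively: for positive magnitudes A and B,
# the witness n = floor(B/A) + 1 satisfies n*A > B.
import math

def archimedean_witness(A, B):
    assert A > 0 and B > 0
    return math.floor(B / A) + 1

A, B = 0.003, 1e9                      # arbitrary example magnitudes
n = archimedean_witness(A, B)
assert n * A > B
print(f"n = {n} works: n * A = {n*A} > B = {B}")
```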

If one denies that axiom, one gets infinitesimals… That was made rigorous through Model Theory, around 1960, three centuries after Leibniz first introduced infinitesimals, starting a fight with Newton.

No Plato universe of “forms”… or rather, they exist, but live as geometries inside brains…

Even more dramatic are hyperbolic and elliptic geometries: they were discovered at least a century before Euclid. Then they were forgotten, and a stupid debate raged for 21 centuries about whether the parallel axiom (one parallel to a line, one only, through a point off the line) was independent of the others. Mathematicians, even the brightest, had forgotten that their ancestors had found geometries with many, or no, parallels…

***

Let’s recapitulate: culture is composed of (vague, but good enough) descriptions of neural networks, which can be transmitted. Once contracted, those neural network templates modify brains in similar ways. Those similarly modified brains all behave similarly, mimicking innate characteristics.

Language enables a transmission of neural geometries, topologies, logics, and categories. Language is primitive in most advanced animals, consisting of grunts, cooing, gestures, etc. But in Homo, language became an advanced mental and cultural duplication system (and some of the mentality passed on is mathematical, but not only).

True, advanced animals have a sort of pseudo-innate capability to evolve neurobiological mathematical structures: through trial and error, mimicking their relatives, or experimenting with what works, young animal brains learn to optimize trajectories: the brains of many predators in pursuit make subsets of themselves into differential calculus machines.

So if Plato’s “forms” are real forms in (generalized) geometry and topology… what are the latter made of? Good question! Therein comes our old friend, the Quantum Wave…

Clearly, math is the most abstracted physics.

Patrice Ayme

DOING AWAY WITH INFINITY SOLVES MUCH MATH & PHYSICS

January 11, 2018

Particle physics: Fundamental physics is frustrating physicists: No GUTs, no glory, intones the Economist, January 11, 2018. Is this partly caused by fundamental flaws in logic? That’s what I long suggested.

Says The Economist:“Persistence in the face of adversity is a virtue… physicists have been nothing if not persistent. Yet it is an uncomfortable fact that the relentless pursuit of ever bigger and better experiments in their field is driven as much by belief as by evidence. The core of this belief is that Nature’s rules should be mathematically elegant. So far, they have been, so it is not a belief without foundation. But the conviction that the truth must be mathematically elegant can easily lead to a false obverse: that what is mathematically elegant must be true. Hence the unwillingness to give up on GUTs and supersymmetry.”

Mathematical elegance? Define mathematics, define elegance. What is mathematics, already? What may be at fault is the logic, that is the mathematics, brought to bear in present day theoretical physics. And I will say even more: all of today’s logic may be at fault (what logic is, is itself the deepest problem in logic…). It’s not just physics which should tremble. The Economist gives a good description of the developing situation, arguably the greatest standstill in physics in four centuries:

“In the dark

GUTs are among several long-established theories that remain stubbornly unsupported by the big, costly experiments testing them. Supersymmetry, which posits that all known fundamental particles have a heavier supersymmetric partner, called a sparticle, is another creature of the seventies that remains in limbo. ADD, a relative newcomer (it is barely 20 years old), proposes the existence of extra dimensions beyond the familiar four: the three of space and the one of time. These other dimensions, if they exist, remain hidden from those searching for them.

Finally, theories that touch on the composition of dark matter (of which supersymmetry is one, but not the only one) have also suffered blows in the past few years. The existence of this mysterious stuff, which is thought to make up almost 85% of the matter in the universe, can be inferred from its gravitational effects on the motion of galaxies. Yet no experiment has glimpsed any of the menagerie of hypothetical particles physicists have speculated might compose it.

Despite the dearth of data, the answers that all these theories offer to some of the most vexing questions in physics are so elegant that they populate postgraduate textbooks. As Peter Woit of Columbia University observes, “Over time, these ideas became institutionalised. People stopped thinking of them as speculative.” That is understandable, for they appear to have great explanatory power.

A lot of the theories found in theoretical physics “go to infinity”, and a lot of their properties depend upon infinity computations (for example “renormalization”). Also, a lot of the problems which appear, and which, say, “supersymmetry” tries to “solve”, have to do with working around infinite computations which go mad for all to see. For example, a plethora of virtual particles makes Quantum Field Theory… and misses reality by a factor of 10^120 (one followed by 120 zeroes…). Thus, curiously, Quantum Field Theory is both the most precise and the most false theory ever devised. Confronted with all this, physicists have tried to do what has NOT worked in the past, sometimes for centuries, like looking for the intellectual keys below the same lighted lamp post, and counting the same angels on the same pinhead.

A radical way out presents itself to simplify the situation. It is itself very simple. And it is global, clearing out of much of logic, mathematics and physics a dreadful madness which has seized those fields: GETTING RID OF INFINITY… at the logical level. Observe that infinity itself is not just a mathematical hypothesis, it is a mathematically impossible hypothesis: infinity is not an object. Infinity has been used as a device (for computations in mathematics). But what if that device is not an object, is not constructible?

Then lots of the problems theoretical physics try to solve, a lot of these “infinities“, simply disappear. 

Colliding Galaxies In the X Ray Spectrum (Spitzer Telescope, NASA). Very very very big is not infinity! We have no time for infinity!

A conventional way to get rid of infinities in physics is to cancel particles with particles: “as a Higgs boson moves through space, it encounters “virtual” versions of Standard Model particles (like photons and electrons) that are constantly popping in and out of existence. According to the Standard Model, these interactions drive the mass of the Higgs up to improbable values. In supersymmetry, however, they are cancelled out by interactions with their sparticle equivalents.” Having a finite cut-off would do the same.

A related logic creates the difficulty with Dark Matter, in my opinion. Here is why. Usual Quantum Mechanics assumes the existence of infinity in the basic formalism of Quantum Mechanics. This brings the non-prediction of Dark Matter. Some physicists will scoff: infinity? In Quantum Mechanics? However, the Hilbert spaces which the Quantum Mechanical formalism uses are often infinite dimensional. Crucial to the Quantum Mechanics formalism, but still extraneous to it, festers an ubiquitous instantaneous collapse (semantically partly erased as “decoherence” nowadays). “Instantaneous” is the inverse of “infinity” (in perverse infinity logic). If the latter has got to go, so does the former. As it is, Quantum Mechanics depends upon infinity. Removing the latter requires us to change the former.

Laplace did exactly this with gravity around 1800 CE. Laplace removed the infinity in gravitation which had aggravated Isaac Newton a century earlier. Laplace made gravity into a field theory, with gravitation propagating at finite speed, and thus predicted gravitational waves (relativized by Poincaré in 1905).

Thus, doing away with infinity shows GUTs’ logic to be faulty, and predicts Dark Matter, and even Dark Energy, in one blow.

If one new hypothesis puts in a new light, and explains, much of physics in one blow, it has got to be right.

Besides, doing away with infinity would clean out a lot of hopelessly all-too-sophisticated mathematics, which shouldn’t even exist, IMHO. By the way, computers don’t use infinity (as I said, infinity can’t be defined, let alone constructed).

Sometimes one has to let go of the past, drastically. Theories of infinity should go the way of those crystal balls theories which were supposed to explain the universe: silly stuff, collective madness.

Patrice Aymé

Notes: What do I mean by infinity not being constructible? There are two approaches to mathematics: 1) Counting on one’s digits, out of which comes all of arithmetic. If one counts on one’s digits, one runs out of digits after a while, as any computer knows, and I have turned this into a global objection, by observing that, de facto, there is a largest number (contrary to what fake, yet time-honored, 25-century-old “proofs” pretend to demonstrate; basically the “proof” assumes what it pretends to demonstrate, by claiming that, once one has “N”, there is always “N + 1”).
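
To make the “as any computer knows” point concrete, here is a minimal sketch of what a fixed-width machine register does (an illustration of finite registers only, not of the full ultrafinitist argument):

```python
# Any fixed-width register has a largest representable integer, and "N + 1" wraps around.
BITS = 64
LARGEST = 2**(BITS - 1) - 1            # largest signed 64-bit integer a machine register holds

def add_wrapping(a, b, bits=BITS):
    """Two's-complement addition: what a finite 64-bit register actually computes."""
    m = 1 << bits
    s = (a + b) % m
    return s - m if s >= (1 << (bits - 1)) else s

print(LARGEST)                         # 9223372036854775807
print(add_wrapping(LARGEST, 1))        # -9223372036854775808: no room left for "N + 1"
```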

2) Set Theory. Set Theory is about sets. An example of a “set” could be the set of all atoms in the universe. That may, or may not, be “infinite”. In any case, it is not “constructible”, not even amenable to extended consideration, precisely because it is so considerable (conventional Special Relativity, let alone basic practicality, prevents that; Axiomatic Set Theory à la Bertrand Russell has tried to work around infinity with the notion of a proper class…).

In both 1) and 2), infinity can’t be considered, precisely because it doesn’t finish.

Some will scoff that I am going back to Zeno’s paradox, being baffled by what baffled Zeno. But I know Zeno; he is a friend of mine. My own theory explains Zeno’s paradox. And, in any case, so does Cauchy’s theory of limits (which depends upon infinity only superficially; even infinitesimal theory, aka non-standard analysis, from Leibniz plus Model Theory, survives my scathing refounding of all of logic, math, and physics).

By the way, this is all so true that mathematicians have developed still another notion, which makes logic, de facto, local, and spurns infinity, namely Category Theory. Category Theory is very practical, but also an implicit admission that mathematicians don’t need infinity to make mathematics. Category Theory has now become fashionable in some corners of theoretical physics.

3) The famous mathematician Brouwer threw out some of the famous mathematical results he had himself established, on grounds somewhat similar to those evoked above, when he promoted “Intuitionism”. The latter field was started by Émile Borel and Henri Lebesgue (of the Lebesgue integral), two important French analysts, viewed as semi-intuitionists. They elaborated a constructive treatment of the continuum (the real line, R), leading to the definition of the Borel hierarchy. For Borel and Lebesgue, considering the set of all sets of real numbers is meaningless, and therefore has to be replaced by a hierarchy of subsets that do have a clear description. My own position is much more radical, and can be described as ultra-finitism: it does away even with so-called “potential infinity” (this is how I get rid of many infinities in physics, which truly are artefacts of mathematical infinity). I expect no sympathy: thousands of mathematicians live off infinity.

4) Let me help those who want to cling to infinity. I would propose two sorts of mathematical problems: 1) those which can be solved when considered in Ultra Finite mathematics (“UF”); 2) those which stay hard, not yet solved, even in UF mathematics.

The Quantum Puzzle

April 26, 2016

CAN PHYSICS COMPUTE?

Is Quantum Computing Beyond Physics?

More exactly, do we know, can we know, enough physics for (full) quantum computing?

I have long suggested that the answer to this question was negative, and smirked at physicists sitting billions of universes on a pinhead, as if they had nothing better to do, the children they are. (Just as their Christian predecessors in the Middle Ages, their motives are not pure.)

Now an article in the Notices of the American Mathematical Society of May 2016 repeats some of the arguments I had in mind: The Quantum Computer Puzzle. Here are some of the arguments. One often hears that Quantum Computers are a done deal. Here is the explanation from Justin Trudeau, Canada’s Prime Minister, which reflects perfectly the official scientific conventional wisdom on the subject:  https://youtu.be/rRmv4uD2RQ4

(One wishes all our great leaders would be as knowledgeable… And I am not joking as I write this! Trudeau did engineering and ecological studies.)

… Supposing, Of Course, That One Can Isolate And Manipulate Qubits As One Does Normal Bits…

Before some object that physicists are better qualified than mathematicians to talk about the Quantum, let me point towards someone who is perhaps the most qualified experimentalist in the world on the foundations of Quantum Physics. Serge Haroche is a French physicist who got the Nobel Prize for figuring out how to count photons without seeing them. It’s the most delicate Quantum Non-Demolition (QND) method I have heard of. It involved making the world’s most perfect mirrors. The punch line? Serge Haroche does not believe Quantum Computers are feasible. However, Haroche does not say how he reached that conclusion. The article in the AMS does make plenty of suggestions to that effect.

Let me hasten to add that some form of Quantum Computing (or Quantum Simulation) called “annealing” is obviously feasible. D-Wave, a Canadian company, is selling such devices. In my view, Quantum Annealing is just the two-slit experiment writ large. Thus the counter-argument can be made that conventional computers can simulate annealing (and that has been the argument against D-Wave’s machines).

Full Quantum Computing (also called  “Quantum Supremacy”) would be something completely different. Gil Kalai, a famous mathematician, and a specialist of Quantum Computing, is skeptical:

“Quantum computers are hypothetical devices, based on quantum physics, which would enable us to perform certain computations hundreds of orders of magnitude faster than digital computers. This feature is coined “quantum supremacy”, and one aspect or another of such quantum computational supremacy might be seen by experiments in the near future: by implementing quantum error-correction or by systems of noninteracting bosons or by exotic new phases of matter called anyons or by quantum annealing, or in various other ways…

A main reason for concern regarding the feasibility of quantum computers is that quantum systems are inherently noisy. We will describe an optimistic hypothesis regarding quantum noise that will allow quantum computing and a pessimistic hypothesis that won’t.”

Gil Kalai rolls out a couple of theorems which suggest that Quantum Computing is very sensitive to noise (those theorems are similar to finding out which slit a photon went through). Moreover, he uses a philosophical argument against Quantum Computing:

It is often claimed that quantum computers can perform certain computations that even a classical computer of the size of the entire universe cannot perform! Indeed it is useful to examine not only things that were previously impossible and that are now made possible by a new technology but also the improvement in terms of orders of magnitude for tasks that could have been achieved by the old technology.

Quantum computers represent enormous, unprecedented order-of-magnitude improvement of controlled physical phenomena as well as of algorithms. Nuclear weapons represent an improvement of 6–7 orders of magnitude over conventional ordnance: the first atomic bomb was a million times stronger than the most powerful (single) conventional bomb at the time. The telegraph could deliver a transatlantic message in a few seconds compared to the previous three-month period. This represents an (immense) improvement of 4–5 orders of magnitude. Memory and speed of computers were improved by 10–12 orders of magnitude over several decades. Breakthrough algorithms at the time of their discovery also represented practical improvements of no more than a few orders of magnitude. Yet implementing Boson Sampling with a hundred bosons represents more than a hundred orders of magnitude improvement compared to digital computers.

In other words, it is unrealistic to expect such a, well, quantum jump…

“Boson Sampling” is the simplest way proposed, still hypothetical, to implement a Quantum Computer. (It is known neither whether it could be built nor whether it would be good enough for Quantum Computing; yet it’s intensely studied nevertheless.)

***

Quantum Physics Is The Non-Local Engine Of Space, and Time Itself:

Here is Gil Kalai again:

“Locality, Space and Time

The decision between the optimistic and pessimistic hypotheses is, to a large extent, a question about modeling locality in quantum physics. Modeling natural quantum evolutions by quantum computers represents the important physical principle of “locality”: quantum interactions are limited to a few particles. The quantum circuit model enforces local rules on quantum evolutions and still allows the creation of very nonlocal quantum states.

This remains true for noisy quantum circuits under the optimistic hypothesis. The pessimistic hypothesis suggests that quantum supremacy is an artifact of incorrect modeling of locality. We expect modeling based on the pessimistic hypothesis, which relates the laws of the “noise” to the laws of the “signal”, to imply a strong form of locality for both. We can even propose that spacetime itself emerges from the absence of quantum fault tolerance. It is a familiar idea that since (noiseless) quantum systems are time reversible, time emerges from quantum noise (decoherence). However, also in the presence of noise, with quantum fault tolerance, every quantum evolution that can experimentally be created can be time-reversed, and, in fact, we can time-permute the sequence of unitary operators describing the evolution in an arbitrary way. It is therefore both quantum noise and the absence of quantum fault tolerance that enable an arrow of time.”

Just for future reference, let’s “note that with quantum computers one can emulate a quantum evolution on an arbitrary geometry. For example, a complicated quantum evolution representing the dynamics of a four-dimensional lattice model could be emulated on a one-dimensional chain of qubits.

This would be vastly different from today’s experimental quantum physics, and it is also in tension with insights from physics, where witnessing different geometries supporting the same physics is rare and important. Since a universal quantum computer allows the breaking of the connection between physics and geometry, it is noise and the absence of quantum fault tolerance that distinguish physical processes based on different geometries and enable geometry to emerge from the physics.”

***

I have proposed a theory which explains the preceding features, including the emergence of space. Let’s call it Sub Quantum Physics (SQP). The theory slaughters a lot of sacred cows. Besides, it brings an obvious explanation for Dark Matter. If I am correct, the Dark Matter puzzle is directly tied in with the Quantum Puzzle.

In any case, it is a delight to see in print part of what I have been severely criticized for saying for all too many decades… The gist of it all is that present day physics would be completely incomplete.

Patrice Ayme’

BEING FROM DOING: EFFECTIVE ONTOLOGY, Brain & Consciousness

December 29, 2015

Thesis: Quantum Waves themselves are what information is (partly) made of. Consciousness, being Quantum, shows up as information. Reciprocally, information gets translated by the Quantum, and then builds the brain, then the mind, thus consciousness. So the brain is a machine debating with the Quantum. Let me explain a bit, while expounding along the way the theory of the General Relativity of Ontological Effectiveness, “GROE”:

***

What is the relationship between the brain and consciousness? Some will point out we have to define our terms: what is the brain, what is consciousness? We can roll out an effective definition of the brain (it’s where most neurons are). But consciousness eludes definition.

Still, that does not mean we cannot say more. And, from saying more, we will define more.

Relationships between definitions, axioms, logic and knowledge are a matter of theory:

Take Euclid: he starts with points. What is a point? Euclid does not say; he does not know; he has to start somewhere. However, where exactly that somewhere is may itself be full of untoward consequences (in the 1960s, mathematicians working in Algebraic Geometry found that points caused problems; they have caused problems in Set Theory too; vast efforts were directed at, and around, points). Effectiveness defines. Consider this:

Effective Ontology: I Compute, Therefore That’s What I Am

Schematic of a nanoparticle network (about 200 nanometres in diameter). By applying electrical signals at the electrodes (yellow), and using artificial evolution, this disordered network can be configured into useful electronic circuits.

Read more at: http://phys.org/news/2015-09-electronic-circuits-artificial-evolution.html#jCp

All right, more on my General Relativity of Ontological Effectiveness:

Modern physics talks of the electron. What is it? Well, we don’t know, strictly speaking. But, fuzzy thinking aside, we do have a theory of the electron, and it’s so precise it can be put in equations. So it’s the theory of the electron which defines the electron. As the former could, and did, vary, so did the latter (at some point the physicist Wheeler and his student Feynman suggested the entire universe was peopled by just one electron going back and forth in time).

Hence the important notion: concepts are defined by EFFECTIVE THEORIES OF THEIR INTERACTION with other concepts (General Relativity of Ontological Effectiveness: GROE).

***

NATURALLY Occurring Patterns Of Matter Can Recognize Patterns, Make Logic:

Random assemblies of gold nanoparticles can perform sophisticated calculations. Thus Nature can start computing, all by itself. There is no need for the carefully arranged patterns of silicon.

Classical computers rely on ordered circuits where electric charges follow preprogrammed rules, but this strategy limits how efficient they can be. Plans have to be made in advance, yet the possibilities multiply at such a pace that the human brain is unable to envision them all. The alternative is to do as evolution itself does when it creates intelligence: select the fittest. In this case, a selection of the fittest electronic circuits.

(Selection of the fittest was well-known to the Ancient Greeks, 25 centuries ago, 10 centuries before the Christian superstition took over. The Ancient Greeks used artificial and natural selection explicitly to create new breeds of domestic animals. However, Anglo-Saxons prefer to name things after themselves, so they can feel they exist; thus selection of the fittest is known by Anglo-Saxons as “Darwinian”. Hence soon we will hear about “Darwinian electronics”, for sure!)

“The best microprocessors you can buy in a store now can do 10 to the power 11 (10^11; one hundred billions) operations per second and use a few hundred watts,” says Wilfred van der Wiel of the University of Twente in the Netherlands, a leader of the gold circuitry effort. “The human brain can do orders of magnitude more and uses only 10 to 20 watts.  That’s a huge gap in efficiency.”

To close the gap, one goes back to basics. The first electronic computers, in the 1940s, tried to mimic what were thought at the time to be brain operations. So the European Union and the USA are trying more of the same, to develop “brain-like” computers that do computations naturally, without their innards having been specifically laid out for the purpose. For a few years, the candidate material found able to reliably perform real calculations has been gold.

Van der Wiel and colleagues have observed that clumps of gold grains handle bits of information (=electric charge) in the same way that existing microprocessors do.

Clumps of grains computing operate as a unit, in parallel, much as neurons seem to do in the brain. This should improve pattern recognition. A pattern, after all, is characterized by a dimension higher than one, and so is a clump operating together. A mask to recognize a mask.

Patterns are everywhere; logic itself is made of patterns.

***

WE ARE WHAT WE DO:

So what am I saying, philosophically? I am proposing a (new) foundation for ontology which makes explicit what scientists and prehistoric men have been doing all along. 

The theory of the nature of being is ontology, the “Logic of Being”. Many philosophers, or pseudo-philosophers, have wrapped themselves up in knots about what “Being” is. (For example Heidegger, trained as a Catholic seminarian, who later blossomed as a fanatical professional Nazi, wrote a famous book called “Sein und Zeit”, Being and Time. Heidegger tries at some point to obscurely mumble feelings not far removed from some explicit notions in the present essay.)

Things are defined by what they do. And they do what they do in relation with other things.

Where does it stop? Well, it does not. What we have done is define being by effectiveness. This is what mathematicians have been doing all along. Defining things by how they work produces things, and theories, which work. The obvious example is mathematics: it may be a castle in the sky, but this castle is bristling with guns, and its cannonballs are exquisitely precise, thanks to the science of ballistics, a mathematical creation.

Things are what they do. Fundamental things do few things, sophisticated things do many things, and thus have many ways of being.

Some will say: ‘all right, you have presented an offering to the gods of wisdom, so now can we get back to the practical, such as the problems Europe faces?’

Be reassured, creatures of little faith: Effective Ontology is very practical. First of all, that’s what all of physics and mathematics, and actually all of science, rest on (and it defines them beyond Karl Popper’s feeble attempt).

Moreover, watch Europe. Some, including learned, yet nearly hysterical commenters who have graced this site, are desperately yelling to be spared from a “Federal Europe“, the dreaded “European Superstate“. The theory of Effective Ontology focuses on the essence of Europe. According to Effective Ontology, Europe is what it does.

And what does Europe do? Treaties. A treaty, in Latin, is “foedus”. Its genitive is foederis, and it gives foederatus, hence the French fédéral and from there, 150 years later in the USA, “federal”. Europe makes treaties (with the Swiss (Con)federation alone, the European Union has more than 600 treaties). Thus Europe IS a Federal State.

Effective Ontology has been the driver of Relativity, Quantum Physics, and Quantum Field Theory. And this is precisely why those theories have made so many uncomfortable.

Patrice Ayme’

