Archive for the ‘Foundation mathematics’ Category

The Quantum Puzzle

April 26, 2016

CAN PHYSICS COMPUTE?

Is Quantum Computing Beyond Physics?

More exactly, do we know, can we know, enough physics for (full) quantum computing?

I have long suggested that the answer to this question was negative, and smirked at physicists sitting billions of universes on a pinhead, as if they had nothing better to do, the children they are. (Just as with their Christian predecessors in the Middle Ages, their motives are not pure.)

Now an article in the Notices of the American Mathematical Society (May 2016) repeats some of the arguments I had in mind: The Quantum Computer Puzzle. Here are some of the arguments. One often hears that Quantum Computers are a done deal. Here is the explanation from Justin Trudeau, Canada’s Prime Minister, which reflects perfectly the official scientific conventional wisdom on the subject: https://youtu.be/rRmv4uD2RQ4

(One wishes all our great leaders would be as knowledgeable… And I am not joking as I write this! Trudeau did engineering and ecological studies.)

… Supposing, Of Course, That One Can Isolate And Manipulate Qubits As One Does Normal Bits…

Before some object that physicists are better qualified than mathematicians to talk about the Quantum, let me point towards someone who is perhaps the most qualified experimentalist in the world on the foundations of Quantum Physics. Serge Haroche is a French physicist who got the Nobel Prize for figuring out how to count photons without seeing them. It is the most delicate Quantum Non-Demolition (QND) method I have heard of, and it involved making the world’s most perfect mirrors. The punch line? Serge Haroche does not believe Quantum Computers are feasible. However, Haroche does not say how he reached that conclusion. The article in the AMS does make plenty of suggestions to that effect.

Let me hasten to add that some form of Quantum Computing (or Quantum Simulation) called “annealing” is obviously feasible. D-Wave, a Canadian company, is selling such devices. In my view, Quantum Annealing is just the two-slit experiment writ large. Thus the counter-argument can be made that conventional computers can simulate annealing (and that has indeed been the argument against D-Wave’s machines).

Full Quantum Computing (also called “Quantum Supremacy”) would be something completely different. Gil Kalai, a famous mathematician and a specialist in Quantum Computing, is skeptical:

“Quantum computers are hypothetical devices, based on quantum physics, which would enable us to perform certain computations hundreds of orders of magnitude faster than digital computers. This feature is coined “quantum supremacy”, and one aspect or another of such quantum computational supremacy might be seen by experiments in the near future: by implementing quantum error-correction or by systems of noninteracting bosons or by exotic new phases of matter called anyons or by quantum annealing, or in various other ways…

A main reason for concern regarding the feasibility of quantum computers is that quantum systems are inherently noisy. We will describe an optimistic hypothesis regarding quantum noise that will allow quantum computing and a pessimistic hypothesis that won’t.”

Gil Kalai rolls out a couple of theorems which suggest that Quantum Computing is very sensitive to noise (those effects are similar to finding out which slit a photon went through). Moreover, he uses a philosophical argument against Quantum Computing:

It is often claimed that quantum computers can perform certain computations that even a classical computer of the size of the entire universe cannot perform! Indeed it is useful to examine not only things that were previously impossible and that are now made possible by a new technology but also the improvement in terms of orders of magnitude for tasks that could have been achieved by the old technology.

Quantum computers represent enormous, unprecedented order-of-magnitude improvement of controlled physical phenomena as well as of algorithms. Nuclear weapons represent an improvement of 6–7 orders of magnitude over conventional ordnance: the first atomic bomb was a million times stronger than the most powerful (single) conventional bomb at the time. The telegraph could deliver a transatlantic message in a few seconds compared to the previous three-month period. This represents an (immense) improvement of 4–5 orders of magnitude. Memory and speed of computers were improved by 10–12 orders of magnitude over several decades. Breakthrough algorithms at the time of their discovery also represented practical improvements of no more than a few orders of magnitude. Yet implementing Boson Sampling with a hundred bosons represents more than a hundred orders of magnitude improvement compared to digital computers.

In other words, it is unrealistic to expect such a, well, quantum jump…

“Boson Sampling” is a hypothetical, and perhaps the simplest, way proposed to implement a Quantum Computer. (It is known neither whether it could be built nor whether it would be good enough for Quantum Computing; yet it is intensely studied nevertheless.)
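Where do claims like “a hundred orders of magnitude” come from? Here is a rough, standard counting illustration (my own sketch, about qubit state vectors rather than bosons, and not taken from the AMS article): a brute-force classical simulation of n qubits has to track 2^n complex amplitudes.

```python
import math

def log10_amplitudes(n_qubits: int) -> float:
    """Decimal orders of magnitude of the 2**n amplitudes that a brute-force
    classical state-vector simulation of n qubits would have to store."""
    return n_qubits * math.log10(2)

for n in (10, 50, 100, 300):
    print(f"{n:3d} qubits -> ~10^{log10_amplitudes(n):.0f} amplitudes")

# 300 qubits already exceed the ~10^80 atoms usually quoted for the observable
# universe -- the source of the "computer the size of the universe" claims.
```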

***

Quantum Physics Is The Non-Local Engine Of Space, and Time Itself:

Here is Gil Kalai again:

“Locality, Space and Time

The decision between the optimistic and pessimistic hypotheses is, to a large extent, a question about modeling locality in quantum physics. Modeling natural quantum evolutions by quantum computers represents the important physical principle of “locality”: quantum interactions are limited to a few particles. The quantum circuit model enforces local rules on quantum evolutions and still allows the creation of very nonlocal quantum states.

This remains true for noisy quantum circuits under the optimistic hypothesis. The pessimistic hypothesis suggests that quantum supremacy is an artifact of incorrect modeling of locality. We expect modeling based on the pessimistic hypothesis, which relates the laws of the “noise” to the laws of the “signal”, to imply a strong form of locality for both. We can even propose that spacetime itself emerges from the absence of quantum fault tolerance. It is a familiar idea that since (noiseless) quantum systems are time reversible, time emerges from quantum noise (decoherence). However, also in the presence of noise, with quantum fault tolerance, every quantum evolution that can experimentally be created can be time-reversed, and, in fact, we can time-permute the sequence of unitary operators describing the evolution in an arbitrary way. It is therefore both quantum noise and the absence of quantum fault tolerance that enable an arrow of time.”
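To see in the simplest terms why noise is the crux, here is a back-of-the-envelope sketch (a deliberately naive model of mine, not Kalai's): if each gate fails independently with probability p and there is no error correction, the probability of a clean run decays exponentially with the number of gates.

```python
def clean_run_probability(p_error_per_gate: float, n_gates: int) -> float:
    """Probability that every gate of an uncorrected circuit is error-free,
    assuming independent gate errors (a deliberately simplistic noise model)."""
    return (1.0 - p_error_per_gate) ** n_gates

for gates in (10, 1_000, 100_000):
    print(f"{gates:>7} gates at 0.1% error/gate -> "
          f"{clean_run_probability(0.001, gates):.2e} chance of a clean run")

# Without fault tolerance the useful depth is capped at roughly 1/p gates.
# The optimistic hypothesis is that quantum error correction removes the cap;
# the pessimistic hypothesis is that the noise itself forbids such correction.
```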

Just for future reference, let’s “note that with quantum computers one can emulate a quantum evolution on an arbitrary geometry. For example, a complicated quantum evolution representing the dynamics of a four-dimensional lattice model could be emulated on a one-dimensional chain of qubits.

This would be vastly different from today’s experimental quantum physics, and it is also in tension with insights from physics, where witnessing different geometries supporting the same physics is rare and important. Since a universal quantum computer allows the breaking of the connection between physics and geometry, it is noise and the absence of quantum fault tolerance that distinguish physical processes based on different geometries and enable geometry to emerge from the physics.”
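At the level of bookkeeping, the “arbitrary geometry” remark is an index relabeling: any site of a d-dimensional lattice can be mapped to a position on a one-dimensional chain and back. A minimal sketch of that row-major mapping (my illustration, not from the article):

```python
from typing import Sequence, Tuple

def to_chain(site: Sequence[int], shape: Sequence[int]) -> int:
    """Map a d-dimensional lattice coordinate to a 1D chain position (row-major)."""
    pos = 0
    for coordinate, extent in zip(site, shape):
        pos = pos * extent + coordinate
    return pos

def to_lattice(pos: int, shape: Sequence[int]) -> Tuple[int, ...]:
    """Inverse map: recover the d-dimensional coordinate from the chain position."""
    coords = []
    for extent in reversed(shape):
        coords.append(pos % extent)
        pos //= extent
    return tuple(reversed(coords))

shape = (4, 4, 4, 4)                    # a tiny four-dimensional lattice
print(to_chain((1, 2, 3, 0), shape))    # -> 108
print(to_lattice(108, shape))           # -> (1, 2, 3, 0)
```

The physical catch, which is exactly Kalai's point: sites that are nearest neighbours in four dimensions land far apart on the chain, so only a machine for which locality no longer constrains the computation can exploit such a relabeling.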

***

I have proposed a theory which explains the preceding features, including the emergence of space. Let’s call it Sub Quantum Physics (SQP). The theory breaks a lot of sacred cows. Besides, it brings an obvious explanation for Dark Matter. If I am correct, the Dark Matter Puzzle is directly tied in with the Quantum Puzzle.

In any case, it is a delight to see in print part of what I have been severely criticized for saying for all too many decades… The gist of it all is that present-day physics would be completely incomplete.

Patrice Ayme’

BEING FROM DOING: EFFECTIVE ONTOLOGY, Brain & Consciousness

December 29, 2015

Thesis: Quantum Waves themselves are what information is (partly) made of. Consciousness, being Quantum, shows up as information. Reciprocally, information gets Quantum-translated, and then builds the brain, then the mind, thus consciousness. So the brain is a machine debating with the Quantum. Let me explain a bit, while expounding, along the way, the theory of General Relativity of Ontological Effectiveness, “GROE”:

***

What is the relationship between the brain and consciousness? Some will point out we have to define our terms: what is the brain, what is consciousness? We can roll out an effective definition of the brain (it’s where most neurons are). But consciousness eludes definition.

Still, that does not mean we cannot say more. And, from saying more, we will define more.

Relationships between definitions, axioms, logic and knowledge are a matter of theory:

Take Euclid: he starts with points. What is a point? Euclid does not say; he does not know; he has to start somewhere. However, where exactly that somewhere is may itself be full of untoward consequences (in the 1960s, mathematicians working in Algebraic Geometry found that points caused problems; they have caused problems in Set Theory too; vast efforts were directed at, and around, points). Effectiveness defines. Consider this:

Effective Ontology: I Compute, Therefore That’s What I Am

Schematic of a nanoparticle network (about 200 nanometres in diameter). By applying electrical signals at the electrodes (yellow), and using artificial evolution, this disordered network can be configured into useful electronic circuits.

Read more at: http://phys.org/news/2015-09-electronic-circuits-artificial-evolution.html

All right, more on my General Relativity of Ontological Effectiveness:

Modern physics talks of the electron. What is it? Well, we don’t know, strictly speaking. But, fuzzy thinking notwithstanding, we do have a theory of the electron, and it’s so precise it can be put in equations. So it’s the theory of the electron which defines the electron. As the former could, and did, vary, so did the latter. (At some point the physicist Wheeler and his student Feynman suggested the entire universe was peopled by just one electron going back and forth in time.)

Hence the important notion: concepts are defined by EFFECTIVE THEORIES OF THEIR INTERACTION with other concepts (General Relativity of Ontological Effectiveness: GROE).

***

NATURALLY Occurring Patterns Of Matter Can Recognize Patterns, Make Logic:

Random assemblies of gold nanoparticles can perform sophisticated calculations. Thus Nature can start computing, all by itself. There is no need for the carefully arranged patterns of silicon.

Classical computers rely on ordered circuits where electric charges follow preprogrammed rules, but this strategy limits how efficient they can be. Plans have to be made in advance, yet the number of possibilities grows so fast that the human brain cannot envision them all. The alternative is to do as evolution itself does when it creates intelligence: select the fittest. In this case, select the fittest electronic circuits.

(Selection of the fittest was well known to the Ancient Greeks, 25 centuries ago, five centuries before the Christian superstition. The Ancient Greeks used artificial and natural selection explicitly to create new breeds of domestic animals. However, Anglo-Saxons prefer to name things after themselves, so they can feel they exist; thus selection of the fittest is known by Anglo-Saxons as “Darwinian”. Hence soon we will hear about “Darwinian electronics”, for sure!)
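To make “selection of the fittest electronic circuits” concrete, here is a minimal sketch of the kind of genetic search involved. Everything below is an illustrative stand-in: the actual Twente experiment evolves control voltages on a physical nanoparticle network, whereas here a tiny 2-2-1 tanh network plays the device, and the genome, target gate, and parameters are assumptions of mine.

```python
import math
import random

# Target behaviour for the evolved "circuit": XOR on two input bits.
CASES = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def circuit_output(genome, inputs):
    """A tiny 2-2-1 tanh network standing in for the configurable device;
    the 9 numbers of the genome play the role of the control voltages."""
    w = genome
    h1 = math.tanh(w[0] * inputs[0] + w[1] * inputs[1] + w[2])
    h2 = math.tanh(w[3] * inputs[0] + w[4] * inputs[1] + w[5])
    return 1 if w[6] * h1 + w[7] * h2 + w[8] > 0 else 0

def fitness(genome):
    """Number of truth-table rows the candidate circuit gets right (0 to 4)."""
    return sum(circuit_output(genome, inp) == target for inp, target in CASES)

def evolve(population=60, generations=300, sigma=0.4):
    """Plain truncation-selection genetic search: keep the fittest, mutate them."""
    pop = [[random.uniform(-2, 2) for _ in range(9)] for _ in range(population)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: population // 5]
        pop = parents + [[g + random.gauss(0, sigma) for g in random.choice(parents)]
                         for _ in range(population - len(parents))]
    return max(pop, key=fitness)

best = evolve()
print("best evolved circuit matches", fitness(best), "of 4 truth-table rows")
```

The point is the loop, not the toy device: generate variants, measure them, keep the best. The material does the computing; the selection does the design.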

“The best microprocessors you can buy in a store now can do 10 to the power 11 (10^11, one hundred billion) operations per second and use a few hundred watts,” says Wilfred van der Wiel of the University of Twente in the Netherlands, a leader of the gold circuitry effort. “The human brain can do orders of magnitude more and uses only 10 to 20 watts. That’s a huge gap in efficiency.”
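For what that “huge gap” amounts to numerically, here is the raw arithmetic. The processor figures come from the quote; the brain’s operation count is a commonly cited rough estimate and should be treated as an assumption.

```python
cpu_ops_per_s   = 1e11   # ~10^11 operations/second, from the quote
cpu_watts       = 300.0  # "a few hundred watts"
brain_watts     = 20.0   # "10 to 20 watts"
brain_ops_per_s = 1e15   # ASSUMPTION: a common rough estimate, not from the quote

cpu_efficiency   = cpu_ops_per_s / cpu_watts       # ~3e8  operations per joule
brain_efficiency = brain_ops_per_s / brain_watts   # ~5e13 operations per joule
print(f"efficiency gap: roughly {brain_efficiency / cpu_efficiency:.0e} times")
```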

To close the gap, one goes back to basics. The first electronic computers, in the 1940s, tried to mimic what were thought at the time to be brain operations. So the European Union and the USA are trying more of the same: developing “brain-like” computers that do computations naturally, without their innards having been specifically laid out for the purpose. In recent years, gold has emerged as the candidate material that can reliably perform real calculations.

Van der Wiel and colleagues have observed that clumps of gold grains handle bits of information (=electric charge) in the same way that existing microprocessors do.

Clumps of grains compute as a unit, in parallel, much as neurons seem to do in the brain. This should improve pattern recognition. A pattern, after all, is characterized by dimension higher than one, and so is a clump operating together. A mask to recognize a mask.

Patterns are everywhere; logic itself is made of patterns.

***

WE ARE WHAT WE DO:

So what am I saying, philosophically? I am proposing a (new) foundation for ontology which makes explicit what scientists and prehistoric men have been doing all along. 

The theory of the nature of being is ontology, the “Logic of Being”. Many philosophers, or pseudo-philosophers, have wrapped themselves up in knots about what “Being” is. (For example, Heidegger, trained as a Catholic seminarian, who later blossomed as a fanatical professional Nazi, wrote a famous book called “Sein und Zeit”, Being and Time. Heidegger tries at some point to obscurely mumble feelings not far removed from some explicit notions in the present essay.)

Things are defined by what they do. And they do what they do in relation with other things.

Where does it stop? Well, it does not. What we have done is define being by effectiveness. This is what mathematicians have been doing all along. Defining things by how they work produces things, and theories, which work. The obvious example is mathematics: it may be a castle in the sky, but this castle is bristling with guns, and its cannonballs are exquisitely precise, thanks to the science of ballistics, a mathematical creation.

Things are what they do. Fundamental things do few things, sophisticated things do many things, and thus have many ways of being.

Some will say: ‘all right, you have presented an offering to the gods of wisdom, so now can we get back to the practical, such as the problems Europe faces?’

Be reassured, creatures of little faith: Effective Ontology is very practical. First of all, it is what all of physics and mathematics, and actually all of science, rest on (and it defines them beyond Karl Popper’s feeble attempt).

Moreover, watch Europe. Some, including learned, yet nearly hysterical commenters who have graced this site, are desperately yelling to be spared from a “Federal Europe“, the dreaded “European Superstate“. The theory of Effective Ontology focuses on the essence of Europe. According to Effective Ontology, Europe is what it does.

And what does Europe do? Treaties. A treaty, in Latin, is “foedus”. Its genitive is foederis, and it gives foederatus, hence the French fédéral and, from there, 150 years later in the USA, “federal”. Europe makes treaties (with the Swiss (Con)federation alone, the European Union has more than 600 treaties). Thus Europe IS a Federal State.

Effective Ontology has been the driver of Relativity, Quantum Physics, and Quantum Field Theory. And this is precisely why those theories have made so many uncomfortable.

Patrice Ayme’