Archive for the ‘Artificial Intelligence’ Category

Self-Taught AI Defeats Programmed AI: What Higher Thinking Means

October 19, 2017

SERIOUSLY THINKING ABOUT THINKING means not just mastering AI, but ESCHEWING ANY THINKING WHICH DEPENDS ONLY UPON A FEW EXPLICIT RULES: Artificial Intelligence will help define the highest forms of thinking, and push them higher than ever before. Thus spoke Brainythustra.

Earlier this year the AlphaGo artificial intelligence program ended humanity’s 2,500 years of supremacy at the board game Go. Not content with its 3–0 victory over the world’s top player, AlphaGo creator DeepMind Technologies unveiled on 10/18/2017 its enhanced version, AlphaGo Zero, which soundly thumped its predecessor program in an AI face-off, winning all 100 games played. Thanks to creativity, I will allege; and there is a wisdom therein for us all.

So here we have Artificial Intelligence, self-teaching now, spending 40 days playing against itself, and defeating anything programmed by humans. AlphaGo Zero, a new program teaching itself, needed just three days to invent advanced strategies as yet undiscovered by human players in the multi-millennial history of the game of Go! An essential ingredient towards our termination, lest we get much smarter: to stay ahead of the game, we have to do better than AlphaGo Zero versus AlphaGo Master.

AlphaGo had been taught to play the game of Go by using two different methods. In the first, called supervised learning, researchers fed the program 100,000 top Go games and taught it to imitate them. In the second, called reinforcement learning, researchers had the program play against itself and learn from the results. AlphaGo Zero skipped supervised learning. The AI learned by itself, without human data, guidance or domain knowledge beyond the game’s rules. After three days, and 4.9 million training games against itself, AlphaGo Zero routed AlphaGo.
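The self-play recipe can be illustrated in miniature. The sketch below is my own toy illustration, not DeepMind’s algorithm (which couples deep neural networks with Monte Carlo tree search): a bare value table learns to play a small Nim variant purely by playing against itself, given nothing but the rules.

```python
# Toy self-play reinforcement learning (a sketch, NOT AlphaGo Zero itself).
# Game: 12 stones; players alternate taking 1-3; whoever takes the last stone wins.
# Only the rules are given; the value table is filled purely by self-play.
import random

N = 12                       # starting number of stones
V = {0: 0.0}                 # state -> estimated win chance for the player to move
ALPHA = 0.1                  # learning rate

def moves(stones):
    return [m for m in (1, 2, 3) if m <= stones]

def pick(stones, explore=0.1):
    """Choose the move that leaves the opponent the worst-valued position."""
    if random.random() < explore:
        return random.choice(moves(stones))
    return min(moves(stones), key=lambda m: V.get(stones - m, 0.5))

def self_play_game():
    """Play one game against itself; return (state, outcome-for-mover) pairs."""
    history, stones = [], N
    while stones > 0:
        history.append(stones)
        stones -= pick(stones)
    results, win = [], True      # the player who took the last stone won
    for s in reversed(history):  # walk back, alternating winner and loser
        results.append((s, 1.0 if win else 0.0))
        win = not win
    return results

random.seed(0)
for _ in range(20000):           # "training games against itself"
    for state, outcome in self_play_game():
        v = V.get(state, 0.5)
        V[state] = v + ALPHA * (outcome - v)

print({s: round(v, 2) for s, v in sorted(V.items())})
```

With nothing but the rules, the table comes to rate positions that are multiples of 4 as lost for the player to move, which is the classical theory of this Nim variant, rediscovered rather than taught.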

The Catholic ex-seminarian, the real Nazi philosopher Heidegger, extra-conjugal lover of the (secular Jew) Hannah Arendt, explained that he struggled to define what he was doing, and who he was. Later he elected to call himself a “thinker”, rather than simply a “philosopher”. Heidegger may have been too optimistic in his own case, but the fact is, “wisdom” is not just what “thinking” produces, but what superior thinking produces. Yes, superior, as in above. A superior race of thinking, so to speak, is what Heidegger aspired to.

Philosophy is good, thinking is better.

All you see here is programmable, and that means it’s nonlinear programming (as it acts upon itself). Moreover, the Quantum looms in the fine details of the machinery, introducing an unpredictable, nonlinear, nonlocal ingredient as Deus Ex Machina.

Thinking is essentially a phenomenon of abstraction revealing the mysterious hierarchies of cause and effect ruling the universe.

Google purchased the company “DeepMind” and is now studying “Deep Learning”. I must admit it seems to be doing an excellent enough job at it to feed the philosophy of thinking and creativity (by supporting experimentally, for all to see, strategies of creativity I long believed in).

How did AlphaGo Zero become so dominant? Learning. Unlike the original AlphaGo, which DeepMind trained with human knowledge and supervision, the new system’s algorithm taught itself to play well. Self-taught. The system was not taught to imitate what humans had previously done. That’s the key.

Computer programs, so far, recognize faces, select or correct trajectories, make purchasing recommendations, and parallel park cars from “learning algorithms,” written by humans who feed massive amounts of data into artificial neural networks.

This is not new: the deliberate mimicking of neural networks by computer systems goes all the way back to the 1940s. This is called machine learning. In AlphaGo’s case it involved analyzing millions of moves made by human Go experts and playing many, many games against itself to reinforce that learning. AlphaGo defeated Ke Jie, then the world’s top human Go player, and also beat other grand masters such as Lee Sedol, with the aid of multiple neural networks requiring 48 Tensor Processing Units (TPUs), specialized microchips for neural network training.
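As a minimal illustration of such a “learning algorithm” (a toy sketch, not any production system), here is a single artificial neuron taught the logical OR function by the classic perceptron rule: nudge the weights whenever the output is wrong.

```python
# One artificial neuron learning from labeled data (the perceptron rule).
# Real systems stack millions of such units, but the principle is the same.

def step(x):
    return 1 if x > 0 else 0

# training data: inputs and desired outputs (logical OR)
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w = [0.0, 0.0]   # weights
b = 0.0          # bias
lr = 0.1         # learning rate

for epoch in range(20):              # a few passes over the data suffice
    for (x1, x2), target in data:
        out = step(w[0]*x1 + w[1]*x2 + b)
        err = target - out           # adjust only when the neuron errs
        w[0] += lr * err * x1
        w[1] += lr * err * x2
        b    += lr * err

print([step(w[0]*x1 + w[1]*x2 + b) for (x1, x2), _ in data])  # → [0, 1, 1, 1]
```

The human supplies the architecture, the data and the learning rule; the machine merely adjusts numbers until its answers match the labels.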

AlphaGo Zero’s training involved only four TPUs and a single neural network that knew nothing about Go beyond the basic rules. The AI learned without supervision: it simply played against itself, and soon was able to anticipate its own moves and how they would affect a game’s outcome (as we do in dreaming).

“This technique is more powerful than previous versions of AlphaGo because it is no longer constrained by the limits of human knowledge,” opined DeepMind co-founders Demis Hassabis and David Silver. “If similar techniques can be applied to other structured problems such as protein folding, reducing energy consumption or searching for revolutionary new materials, the resulting breakthroughs have the potential to positively impact society,” their blog and their Nature article, “Mastering the game of Go without human knowledge”, say, insisting that “a long-standing goal of artificial intelligence is an algorithm that learns, tabula rasa, superhuman proficiency in challenging domains”.

AlphaGo Zero devised unconventional strategies. Go is typically played using “stones” colored either black or white on a board with a 19 by 19 grid. Each player places stones with the objective of surrounding the opponent’s. AlphaGo Zero discovered, played, and ultimately learned to prefer a series of new joseki [corner sequences] previously unknown. Go games typically start with plays in the grid’s corners, to gain a better overall position on the board. Move 37 in the second game against Lee Sedol, widely recognized as “rare and intriguing” by professional Go players, showed the creativity of AlphaGo and the potential of AI.

Here we touch something that has been central to my thinking ever since I began to seriously think about thinking: topmost human thinking is not about what is measured easily. Topmost human thinking is not about what is programmed easily, and ruled easily. This is what the triumphs of AI show us.

That’s why I have secretly scoffed at IQ (however flatteringly towering mine got, my mandatory experience with IQ testing once showed me most of it is BS, as I spent time meta-analyzing the test itself). I also look down on all brainy games, such as chess, because, precisely, they are not brainy enough. Same objection to most fiction literature. I like to play chess, just as I like tennis, but it’s no proof of intelligence, or even of a correctly functioning brain. The same objection can be rolled out against (much, not most) mathematics itself. If you want full brains, you have to get smarter.

DeepMind achieved “a very impressive technical result; and both their ability to do it—and their ability to train the system in 40 days, on four TPUs—is remarkable,” says Oren Etzioni, chief executive officer of the Allen Institute for Artificial Intelligence (AI2), founded by Microsoft co-founder Paul Allen in 2014. “While many have used [reinforcement learning] before, the technical aspects of the work are novel.”

However, Etzioni adds: “I think it would be a mistake to believe that we’ve learned something general about thinking and about learning for general intelligence. This approach won’t work in more ill-structured problems like natural-language understanding or robotics, where the state space is more complex and there isn’t a clear objective function.”

Unsupervised training is the key to ultimately creating AI that can think for itself, Etzioni says, but “more research is needed outside of the confines of board games and predefined objective functions” before computers can really begin to think outside the box.

Of course, for thinking out of the box, we need something that is averse to boxes, and is actually always out of any box we try to stuff it in.

And the answer is…

The Quantum.

Quantum computers are the key to maximally innovative intelligence, precisely because the Quantum recognizes no bounds. Just like real intelligence. And real consciousness. Verily, that’s no accident, but consequence.

In conclusion, let me reinforce what we learned here experimentally, because it has great philosophical import: maximal human creativity requires, first, tabula rasa. Hence all human mental activities not resting on tabula rasa should be viewed as belonging to a lower, more menial sort. This is real progress in thinking about thinking, and how to make it better…

Patrice Ayme’



March 10, 2016

What Characterizes Human Intelligence?


We had President Obama running amok with his “signature strikes”, with half-blind drones with pixelated vision killing civilians, far from battlefields, in faraway lands. These crimes full of technological arrogance gave a bad name to Artificial Intelligence. Are we far from robots running amok? It’s clear that the Obamas of this world will have to be reined in.

The (Korean) world champion of the famous Chinese game “Go” was beaten by a Google computer: “I am very surprised because I have never thought I would lose. I didn’t know that AlphaGo would play such a perfect Go.” The champ looked a bit frazzled, but not as angry as Gary Kasparov, the world chess champion, when he was beaten by an IBM computer program, DeepBlue. Kasparov stormed out of the room.

Kasparov’s anger was not an intelligent reaction, because it was obvious, all along, that chess is not such an intelligent game that a simple machine cannot do better. If you want a really intelligent game, try to become really ethical (vote for Sanders, not the corrupt one). Ethics? A supremely human game where my friend Obama failed miserably. He and his toys, armies of drones and plutocrats.

The Artificial Neural Networks We Build Do Not Grow Naturally. And Their Neuronal Nodes Are Simplistic Relative To Real Neurons. Real Neurons Are Environmentally Sensitive Self Building Micro Computers.

“Go” is 3,000 years old. A Go board is 19 by 19; a chess board is 8 by 8. People who love to sound scientific say: “Go has more combinations than there are atoms in the universe” (reality check: we don’t know how big the universe is, so we cannot know how many atoms are therein!)

DeepBlue used brute force to beat Kasparov. With “Go”, the breakthrough came from using neural networks. Neural networks can be made to learn. The computer used a program called “AlphaGo” (devised by my whipping boy, Google, which I congratulate, for once!). “AlphaGo” had to use something closer to “INTUITION”; some even say, imagination.


Does Patrice “Make Things Up”? I Hope So!

A few days ago, I pointed out to some would-be Stoics that the trite rejoinder of Marcus Aurelius’ admirers, that he was the first emperor “with a natural born son”, was a grotesque lie. I rolled out counterexamples, complete with the names of various sons…

All these sons were not named emperors-to-be by their doting fathers. Only Marcus Aurelius did that. This is of considerable import, because Marcus Aurelius is viewed as a pinnacle of wisdom by a large following (Marcus is the Muhammad of Stoicism).

Whereas I claim that, when Aurelius named his five-year-old son second in command in the empire (“Caesar”), contrary to all Roman tradition, Marcus Aurelius showed he was anything but wise. Insane maniac, would-be king, violating the Republic is more like it. In particular, the two emperors just prior to Marcus Aurelius had more than three sons and grandsons, yet nominated none of them as successors when they were children. Although Marcus did. (Even the kings of Saudi Arabia don’t really do this!)

That, in turn, shows that Marcus’ followers have a serious problem evaluating reality. And sure they do.

A philosopher with a prestigious chair reacted angrily and accused me in public of “MAKING THINGS UP”. Even as a self-described “stoic”, he could not take the reality of all these sons anymore.

Of course, I did not make anything up, in this particular case. I shoot vicious minds to kill, or, at least, maim. It’s best done with the truth.

But the accusation got me to think. Do I make things up? That’s one beautiful thing about nature and its dangerous animals: even rattlesnakes can help me to think. Especially rattlesnakes.

The obvious glared back at me: even to find the truth, one has to make things up. First, make things up (that’s imagination, which is most important, as Einstein pointed out). That’s making a theory. Or, in the deep cases, making a new neural network (this is the part where intuition, that is, emotion, enters, as it is exactly what builds the network). Then one checks that this new theory fits the truth (that’s the part where the network learns).

In the case of Aurelius, after revering him for a few decades, I came across facts and quotes which changed my emotional disposition relative to him. Instead of staying a psychological prisoner of his “Meditations”, I became a hostile witness, and explored facts which would demonstrate Marcus Aurelius’ viciousness. I found plenty (including the “natural son” story).



My theory of the mind is simple: impelled by genetics and epigenetics (both in the most general sense imaginable) plus the environment, neural circuitry gets elaborated in an attempt to make mini models of pieces of nature within the brain. So mental circuits are (SORTS OF) answers to the environment.

“Sort of” is crucial: it means the neural circuitry elaborated in reaction will often NOT be (capable of being) a faithful (enough) model of the environment. That’s literally impossible, but that discrepancy is precious.

That discrepancy, the difference between the neural circuitry the (perceived) environment impels and said (real) environment, is human creativity.

(I say “human”, for ease of conceptualization, but actually I should say “animal intelligence”.)

What is going on with Artificial Neural Network machines? They learn, as we do, through what is called the Hebbian mechanism.

How to explain neural network learning in the simplest terms? Basically, in very rough first approximation, imagine the neural network is a canal system (made of canals which can be eroded). Suppose one wants an output: more water through a desired exit gate. Suppose one augments the flow there (say by lowering that exit gate). The canal network will adjust itself to maximize output.
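The canal analogy is, in essence, Hebb’s rule: connections carrying correlated activity get strengthened, “eroded wider”. A toy sketch, assuming just two input channels feeding one output unit:

```python
# Hebbian learning in miniature: weights are "canal widths".
inputs = [(1, 0), (1, 0), (1, 0), (0, 1)]   # channel 1 carries traffic 3x as often
w = [0.1, 0.1]                               # initial canal widths
lr = 0.05                                    # erosion rate

for x1, x2 in inputs * 50:                   # repeated exposure
    y = w[0] * x1 + w[1] * x2                # output "flow"
    # Hebb's rule: weight change proportional to input activity times output activity
    w[0] += lr * x1 * y
    w[1] += lr * x2 * y

print(w)   # the busy channel has grown far wider than the idle one
```

Note that plain Hebbian growth is unbounded (the busy canal erodes without limit); realistic models add a normalization term, such as Oja’s rule, to keep the weights finite.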

However, we, very intelligent animals, use a META-HEBBIAN mechanism of neuronal network genesis. In Artificial Neural Networks, the network is given, and then it learns: the neural circuit is presently provided by humans to become part of a machine.

The machine does not make it itself. But we do.

Human brains literally make things up, because we objectively, physically, make our neural networks up. We do not just tweak our networks. The networks which characterize our highest intelligence are themselves answers to the environment we are in.

To make a neural network we use emotions: it is known that emotional activity drives dendrite growth, thanks to glial activity.

These neural networks’ construction is tightly controlled from the outside, not just by the environment in the most general sense, but, essentially, by what we call culture. Culture is the set of schematics of the networks which work.


So, when we want to explore if machines could become as clever as human beings, we have to ask: could machines be devised to make things up? Could machines be devised which would make their own artificial neural networks?

Many of our fundamental neural networks (such as those controlling breathing) come from “genetics” (in the most general sense). Those arise semi-automatically (with minimal back and forth with the environment). However, we make our own most sophisticated neural networks from the emotions which guide their architecture. Emotions are organized topologically, with a NON-METRIC topology.

Unfortunately, or fortunately, and certainly worryingly, yes, we could make machines which have their own emotions which build their own neural networks. There is no reason to think we could not build such machines. They probably would have to use artificial neurons, etc. (And why not real neurons?)

The superiority of the human mind comes from making things up, or making ourselves up. Such machines would be similar.

Technologies, the special discourses, are our genus’ genius. Technologies made our genus possible, for at least three million years. Artificial, creative intelligence is more of the same, generating what we become. Not only are we becoming gods, but gods we cannot even imagine.

Imagination is when we make things up. It entails the construction of neural networks which will constitute what future knowledge is made of. This is why imagination is more important than knowledge. Because, without imagination, all the knowledge we would have would reflect neither creativity, nor even will.

Oh, by the way, should we panic? No. But it means that clueless individuals such as the ethically challenged Obama should not have the powers he had under stupid and Nazi-like technology such as drones used to kill civilians. It’s not a matter of replacing Obama by Sanders (although that would be a good idea).

We need a revolution (as Sanders says). We are going to get, in any case, a technological revolution. Intelligence is going to become a science.

But that intelligence revolution has to be about direct democracy fed by the best information possible, that is, total transparency, the exact opposite of the world the malefactor manufacturer Apple is proposing to us. And Obama in all this? He has only a few months to atone for the crimes he committed with his wanton usage of high tech. But first, he would have to realize how egregious they were.

This goes well beyond drones. Having the correct ethics will be fundamental for the safe and effective deployment of all too human artificial intelligence.

Patrice Ayme’