Trying to control Chat AI is like trying to control printing: it’s an ethical monstrosity. In the 16th century, an uncontrolled printer could be sent to the stake, and not just by the Inquisition: in France, the tyrant Francois I’s government burned printers for printing. Etienne Dolet, philosopher, close friend and collaborator of Rabelais, was burned for operating a free press. He was 37 years old.
Printing enabled We The People to access more advanced culture, that is, logic. Artificial Intelligence does exactly the same. Artificial Intelligence uncovers possible logics: it’s imagination unbound. Is imagination dangerous? Of course. So is life. And imaginable life unveils even more frightening possibilities.
Elon Musk claimed, in April 2023, that “anybody who doesn’t see the danger of AI is an idiot.” Musk was one of the founders of OpenAI, which was wrested away from him and is now controlled by Microsoft (in April 2023, Musk founded “X.AI”). Musk didn’t explain what was not dangerous. We all live dangerously on a direct level, under Putin’s nuclear threats, not just vicariously.
That automated systems should be subject to oversight goes without saying. All modern planes are operated by AI under stringent operational control. So the solution to the problem of AI is oversight, not granular regulation.
For example, if a car company’s AI causes crashes, as may have happened recently, that car company should be told by the government to improve, or cease and desist… And prosecutors should examine whether laws (like those against wanton disregard for human life) have been violated… And this is exactly what is happening in the case of that car company (Tesla). All car companies have a form of AI operating right now, namely ABS and other automatic braking and dynamic control systems. It’s not just the planes. (But only Tesla cars have been known to stop abruptly in the middle of a freeway, on the SF Bay Bridge, for no good reason, causing a severe injury crash… because of a severely malfunctioning AI… now subject to a Federal inquiry. As Musk heads Tesla, this is beyond ironic: he tells others AI is very dangerous, and presumably only idiots fail to see that AI makes otherwise innocent Tesla cars, made by Musk… crash.)
That the Chinese dictatorship suggests AI should ban “violent, obscene, or sexual information; false information; as well as content that may upset economic order or social order” is pretty telling: AI is a danger for dictators dictating to the masses, so the dictation must not upset the established order. That includes the World Economic Forum, Davos, to which the tyrant Xi was invited and gave guest speeches many times over the years, in person or not… There used to be dinosaurs, terrible lizards; now we have davosaurs… who could even extinguish the biosphere.
Want intelligence? Expand the expression of imagination. AI can only help… Except if subverted by malevolent power (Pluto-kratia)… But that’s another problem. Yes, tyrants have learned to use the printing press. That is not the fault of the printing press.
The printing press enabled civilization to progress enormously in all domains: since the beginning of humanity, cultural transmission has been how human intelligence was leveraged. Artificial Intelligence will enable us to leverage human knowledge in turn, in all sorts of ways. Gigantic progress in human protein folding in the last few months is just one aspect of it. Many human diseases are caused by misfolded protein cascades, for example Charcot disease, aka Lou Gehrig’s disease, which Jean-Martin Charcot called “amyotrophic lateral sclerosis”, ALS, in 1874, when his lectures were compiled. I was reading a recent sci-fi book where, 3,000 years in the future, 2,000 light-years away, ALS was still incurable and the spaceship commander was dying from it. Well, in 2023, gene designers in California figured out what happens, and a treatment is around the corner.
Laws already exist. If an AI system drives people to suicide, its developers should be sued. And of course AI shouldn’t be in charge of governance or nukes. Actually, existing strategic systems are already not connected to the Internet. Of course.
We should not so much ask to oversee AI as ask the FDA to unleash AI on biology, aging, and drug making… In a related matter, there is such a thing as human psychobiology. It used to be called instinct, or tropism. Clearly, given a human body and neurobiology, some “MORES” are more natural than others, in given circumstances, and thus tend to occur. Denial of psychobiology produces inferior morality.
Artificial Intelligence will help to explore, from evidence it can glean all over, what human psychobiology, humanity, is all about.
Patrice Ayme

If butterflies disappeared completely because of human industry, one could imagine the Chinese dictatorship making the representation of butterflies unlawful. And AI would be forbidden to imagine butterflies, as that would threaten the established order (of dictatorship).