The Machine-Human and the Myth of Superintelligence

We live in an age where AI is celebrated as humanity’s next great leap – the path to artificial general intelligence (AGI), superintelligence and a society that automates everything. But behind the hype lies another story: our creativity is bodily, born of crises and lived experiences, and cannot be programmed. At the same time, we risk becoming increasingly machine-like ourselves, as technology shapes how we see the world and ourselves. The most radical innovation of the future is therefore not a supercomputer – but a human being who refuses to become a machine.

Artificial intelligence dominates today’s technological discourse. Many predict that machines will soon match or surpass our own intelligence, which would render human intelligence obsolete. Billions have been poured into this dream, and the brightest minds are given time and resources to open Pandora’s box.

Proponents of this view, such as Nick Bostrom, argue that because intelligence rests on physical substrates – like the human brain – there is no reason it could not be recreated on a much larger scale, unbound by the confines of the human skull (Bostrom, 2014).

But is this really possible – or are we just building castles in the air?

The Theory of Cyclical Development Undermines the AI Hype

According to the theory of cyclical development – which I have previously presented on this blog – human intelligence and creativity go beyond algorithms. They are part of life itself, deeply rooted in existential experiences and crises, and they are non-deterministic.

Early Homo sapiens learned survival through imitation within a rhythmic song system that maintained extremely durable lifeways over hundreds of thousands of years. Mirror-neuron networks made it possible to accurately imitate toolmaking and movement patterns across generations. During evolution the brain did not grow because we became ever more creative; it grew because we became better at reproducing and consolidating survival strategies that already worked.

The original function of language, in the form of song and rhythm, was therefore to guide people in their daily tasks – to synchronise thought and body, individual and environment – rather than to constantly invent new things. The free, infinite language, the abstract speech we now take for granted, arose much later. Only about 75,000 years ago did the vocal song system shift to a fully symbolic language, unleashing not only new ways of thinking and communicating but also new ways of moving.

When sudden environmental changes made old routines impossible – for example ice-age pulses and above all the Toba super-eruption – the song system “overheated”. Song ran idle, cognitive dissonance arose, experimentation exploded and new tools, art forms and social systems were born.

Language was an invention of innovative people in a tumultuous time – and this applies to all our ancestors in the genus Homo who went through the same cyclical process. Even Homo erectus and Homo heidelbergensis experienced crises and invented new survival strategies, then let the brain and the song cement the new solutions and lifeways, whereupon the brain grew larger again.

This represents a Copernican revolution in how we view human evolution – we turn the received picture on its head: larger brains did not evolve to make us ever more ingenious, but to reproduce acquired survival strategies effectively and to close the gap between body and thought, human and habitat – through song and mirror neurons. Creativity continued to develop, but as a latent reserve beneath the solid surface – ready to break through when the next crisis tore down imitation’s hegemony.

This means that genuine creativity and innovation are deeply rooted in the human body: in emotions, sensory impressions and physical interaction with the world. They arise out of cognitive dissonance – the tension between expectation and experience, between body and soul – and they are not deterministic. They emerge from chaos and are truly free and transformative.

Machines, by contrast, lack frustration, wonder and joy. They manipulate symbols but have no conscious relationship to them.

Human intelligence thus did not emerge through a gradual, deterministic natural selection where generation after generation became ever smarter in response to a changing environment. It arose latently – and when it finally burst forth it was untamed and went beyond the programmable, as part of life’s very lifeblood. This cannot be recreated in a laboratory – no more than we can create life itself.

Large Language Models and Their Limitations

Our largest language models operate strictly within a logical, stripped-down system of propositions. There is hardly any genuine creativity there – only rapid, massive recombination of already existing texts. This stands in sharp contrast to how actual scientific and intellectual breakthroughs occur.
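
To make the contrast concrete, here is a deliberately crude sketch in Python: a toy bigram chain trained on a made-up sentence. This is of course not how real language models are built (they are neural networks trained on vast corpora), but it shows in miniature what recombination of existing text means: every word pair the program can produce already occurs in its training corpus.

    import random
    from collections import defaultdict

    # Toy illustration only (a bigram chain, not a real LLM): the "model"
    # can do nothing except recombine word pairs it has already seen.
    corpus = (
        "the song guided the body and the body guided the thought "
        "the crisis broke the song and the song gave way to speech"
    ).split()

    # Record which word follows which in the training text.
    transitions = defaultdict(list)
    for current_word, next_word in zip(corpus, corpus[1:]):
        transitions[current_word].append(next_word)

    def generate(start: str, length: int = 12, seed: int = 0) -> str:
        """Produce text by repeatedly sampling an already observed successor."""
        rng = random.Random(seed)
        word, output = start, [start]
        for _ in range(length - 1):
            successors = transitions.get(word)
            if not successors:      # no observed continuation: the chain stops
                break
            word = rng.choice(successors)
            output.append(word)
        return " ".join(output)

    print(generate("the"))
    # Every adjacent word pair in the output already exists in the corpus;
    # nothing outside the training data can ever appear.

The toy can only reshuffle what it has already seen. The breakthroughs described next are of a different kind altogether.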

Alfred Russel Wallace had his evolutionary insight while feverish in Indonesia – the idea of natural selection struck like a revelation, not as the result of formal deduction. Albert Einstein described “the happiest thought of my life” when, working at the patent office in Bern, he imagined an observer in free fall – a sudden flash of insight that later led to the general theory of relativity and fundamentally changed our worldview.

Such non-logical breakthroughs lie completely beyond today’s LLM architecture. Today’s AI – even the most sophisticated models – engages in algorithmic manipulation of symbols without lifeblood. It lacks the existential creativity that only appears when imitation’s chains break, when body and thought collide and free language arises. AGI presupposes a “brain in a box” that can evolve without physical, existential embodiment – and that embodiment, together with the creativity it makes possible, is exactly what AI lacks.

The Machine-Human – The Real Danger

Ironically, it is not technology that is becoming more human today, but humans who risk becoming more machine-like. Philosopher Martin Heidegger spoke of technology’s ability to “unconceal” reality – to make us see the world in a new way. In the new computerised era we are reduced to data points, patterns and functions that can be optimised. When the world is revealed through the grid of technology we begin to see ourselves as resources, as machinery.

This has two crucial consequences:

  1. Loss of human dignity. If humans are seen as just another algorithm, the foundation of our inviolability dissolves. People become interchangeable, measurable and comparable in the same logic as raw materials and means of production. We are seen as programmable automatons – a view that threatens to dehumanise us and in the long run erode human rights. If we start to see people as fallible machines rather than moral subjects, we open the door to a society where the value of each person can be measured, priced and – in the worst case – switched off.
  2. Humans become technology. In our drive for efficiency and optimisation we ourselves become increasingly machine-like. We live by measurable rules, let algorithms guide our decisions and make ourselves ever more rule-bound. It is not technology that becomes human – it is we who become technology.

Heidegger would say that this is the real danger: not technology itself, but that we see the world – and ourselves – only through the grid of technology. We then forget other ways of being human, other ways of living and understanding ourselves.

Heidegger’s reflections on everyday language and poetry offer a powerful tool for understanding this danger. He did not mean that poetry arose from everyday language as its highest form, but rather the opposite – that the original language was poetic, open and full of wonder. The first modern humans were thus grand poets (Heidegger, 1971). It is this poetry, this spiritual and non-physical dimension, that is now being lost in the AI era.

AI Is Like Any Other Technology

AI is fundamentally like any other technology. It is not exceptional; it cannot create by itself but only in interaction with humans – just like writing, the wheel and other groundbreaking innovations.

But the danger is that AI risks reinforcing an already ongoing, dehumanising process – a process that capital accumulation in symbiosis with technology has long driven. Technology shifts boundaries and stretches its tentacles deep into the periphery to suck resources into the centre, just as human ecologist Alf Hornborg has shown (Hornborg, 2022). In the centre, people become intoxicated by the illusion of freedom and dream of eternal life. But in practice we feed the machines – not the humans – something the gigantic, extremely energy-hungry data centres demonstrate with brutal clarity.

Reclaim the Human

Even though we will never achieve either artificial general intelligence or superintelligence – a realisation that will likely soon burst the AI bubble, with enormous economic consequences – AI will still become an ever larger part of the infrastructure that feeds injustice and alienation. Economic power will concentrate into even fewer hands, and at the same time we risk changing ourselves: becoming ever more standardised, ever more governed by the logic of algorithms, ever more entangled with technology – and ever less poetic.

The most radical innovation of the future is therefore not a supercomputer – but a human being who refuses to become a machine.

References

Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press.

Heidegger, M. (1971). Poetry, Language, Thought. Trans. A. Hofstadter. New York: Harper & Row. (Includes the lecture “…dichterisch wohnet der Mensch…” delivered 1951.)

Hornborg, A. (2022). The Magic of Technology: The Machine as a Transformation of Slavery. London & New York: Routledge.