The world is facing major change, and not just in a geopolitical sense. With Trump in the US, Putin in Russia and his war of aggression in Ukraine, and the behavior of dictators worldwide, the international structure is in upheaval. But new, powerful technologies are also contributing to this, with artificial intelligence at the forefront.
Hundreds of billions are currently being invested in building data centers to power chatbots, AI agents, and physical AI in the form of autonomous cars and robots. It is a gold rush and one of the greatest technological shifts humanity has ever experienced.
And this is giving rise to fears. Are jobs at risk? What if AI makes a mistake? Shouldn’t we declare a moratorium or even a ban on this technology?
The Familiar and Accepted Aspects of Old Technologies
Such fears always arise when the risks of a new technology cannot (yet) be assessed. It doesn’t matter that more than a million people worldwide are killed in car accidents every year and 10 to 12 million are injured. We are familiar with this technology, we use it every day, and we know its value. Yes, we also know how it can be used in wars, how some people deliberately drive into others, and how others unintentionally cause harm. But we accept it because the technology is so useful. Traffic accidents rarely make the front page because they are hardly sensational, unless an accident claims an unprecedented number of victims. Then there are loud calls for even stricter rules and higher penalties, only for the issue to quickly disappear from public consciousness again.
The Unknowns of New Technologies
With new technology, on the other hand, we know neither its advantages nor its risks. How it can help us is still largely unknown, because it is only just beginning to be used in practice and very few people have extensive experience with it. Many people’s imagination reaches its limits when they try to picture its positive uses, a phenomenon that probably also has to do with the system in which we operate.
But when it comes to risks, our imagination runs wild. We can think of many things that could go wrong, and we don’t shy away from even the most absurd scenarios. A good example is the trolley problem, which comes up again and again in the context of autonomous cars. Yet if you ask people how often they have actually faced this dilemma themselves, whether they have ever heard of it happening, or whether they practiced for it in driving school, the answer is always no.
Covers
All of this manifests itself in cover stories that paint dystopian scenarios of doom. The German weekly DER SPIEGEL has once again taken the cake with its current cover: a Terminator, as if the image alone weren’t menacing enough, accompanied by the title “Deadly Intelligence,” all in dark, threatening colors.

DER SPIEGEL is thus continuing its long tradition of doom-laden covers. On the subject of robots alone, there have been numerous alarming cover images predicting the imminent end of humanity. And did it come to pass? Of course not. We are still waiting.

What is never really discussed, and if it is, then never on the front page, are the real dangers that already exist today. The following two covers never appeared, even though far more people die from these two tools. Every day.
DER SPIEGEL is by no means the only media outlet to feature negative headlines about AI on its front page. There is repeated talk of the “AI bubble” that is likely to burst soon.

A telling example was the interview a few days ago on the Austrian news program ZIB2, in which host Armin Wolf spoke with Peter Steinberger, the developer of the AI agent software OpenClaw. Over the 24 minutes, Wolf asked almost exclusively about the risks and dangers of the technology; he did not ask a single question about how it could be used positively.
Wolf justified this by saying that it is his job as a journalist to ask critical questions. But he failed to ask perhaps the most important one: why someone as talented as Peter Steinberger cannot be kept in Austria and is emigrating to the US, where he is taking up a position at OpenAI, the maker of ChatGPT.
Media Schizophrenia
Meanwhile, the same media outlets complain that their own industry and country are lagging behind in the development and application of artificial intelligence. Commentaries are written about how a talent like Peter Steinberger is “settling scores” with Europe, without the realization that their own editorial staff’s predominantly negative reporting on AI has produced precisely this negative, fearful public perception.
The media are doing their country a disservice. When sentiment analyses of AI coverage show that between 60% and 90% of articles are negative, and when cover stories paint primarily the dangers and risks in the darkest colors, the media are not entirely innocent when domestic talent leaves for places where it can make a positive contribution, when industry, politics, and society hesitate to adopt the technology or block it outright, and when people want nothing to do with it. That attitude will not be enough to maintain the country’s position as an industrial and economic location.
The media bear responsibility for this future through the way they report. At present, they are failing in this regard.