The double role of AI

Artificial intelligence plays a dual role in AI NAVI. On the one hand, AI systems are studied and used to master complexity; on the other hand, they themselves contribute to that complexity.

One of the keywords that has come up again and again in recent years in connection with changes in the political landscape is that of filter bubbles. The metaphor of the filter bubble, or the so-called filter bubble theory, was introduced in 2011 by the internet activist and author Eli Pariser (Pariser, 2011) and posits a connection between the recommendation algorithms of large digital media platforms and increasing radicalisation. According to this theory, the algorithms used by Google, YouTube or Facebook to suggest further content of interest respond to a user's interest in a certain topic by offering increasingly radical content on that same topic. Someone interested in whether the moon landing was real, for example, would increasingly be suggested content arguing ever more intensively that it was a conspiracy.

The difficulty of the filter bubble metaphor

The difficulty with this claim lies not in stating the phenomenon but in actually proving it. While the basic intuition cannot be dismissed, a more detailed examination of the phenomenon presents great difficulties. In fact, it often seems that it is not primarily the AI algorithms, i.e. the recommender systems, that lead to radicalisation, but the combination of business goals and recommender systems that leads to content radicalisation.

In particular, a significant change to the recommender system of the video platform YouTube in 2015 appears to have contributed to a radicalisation of content. YouTube changed the weighting of the various factors in a video's rating. Previously, the trustworthiness of the producer carried great weight, in that a channel's subscriber numbers were rated highly. With the change, the importance of subscriber numbers was reduced and the so-called click-through rate of a video was weighted more highly instead, i.e. concrete user behaviour with that specific video, for example how often users do not watch the video to the end. This significantly changed which videos reached a large audience. Conversely, it created evolutionary pressure to adapt: content that could adapt to the new conditions reached users, while content that could not was increasingly marginalised.
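The shift in weighting described above can be illustrated with a deliberately simplified sketch. The scoring function, signals and weights below are hypothetical and not YouTube's actual algorithm; the point is only that moving weight from subscriber count to per-video engagement flips which video ranks first.

```python
# Hypothetical video score as a weighted sum of normalised signals in [0, 1].
def video_score(subscribers_norm, engagement_norm, w_subs, w_eng):
    """Combine a channel-trust signal and a per-video engagement signal."""
    return w_subs * subscribers_norm + w_eng * engagement_norm

# Two invented videos: an established channel vs. a shrill newcomer.
videos = {
    "established": {"subscribers_norm": 0.9, "engagement_norm": 0.3},
    "newcomer":    {"subscribers_norm": 0.1, "engagement_norm": 0.9},
}

# Old weighting: trustworthiness (subscriber numbers) dominates.
old = {name: video_score(v["subscribers_norm"], v["engagement_norm"], 0.8, 0.2)
       for name, v in videos.items()}

# New weighting: concrete behaviour with the specific video dominates.
new = {name: video_score(v["subscribers_norm"], v["engagement_norm"], 0.2, 0.8)
       for name, v in videos.items()}

print(max(old, key=old.get))  # under the old weights: "established"
print(max(new, key=new.get))  # under the new weights: "newcomer"
```

The same two videos thus trade places purely through a change in weights, which is the evolutionary pressure the text describes: producers who optimise for the newly dominant signal are rewarded.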

These changes were, of course, driven by YouTube’s business goal of keeping users on the platform as long as possible. A side effect was that the recommender systems now gave high ratings to content that was particularly shrill and emotionalising, yet presented within ten minutes and thus heavily abbreviated. It was not a specific preference for obscure or marginal content, but a preference for this style of presentation, that brought many such marginal media creators into focus. Naturally, an ecosystem of other media creators then developed who wanted to profit from the success of individual videos and creators, giving the particularly obscure topics a further resonance space.

It therefore seems that it is by no means only certain functionalities of the underlying algorithms that promoted the phenomenon of radicalisation, but above all certain business practices that created an economic environment in which providing suitable content becomes an economic necessity for its producers. One might think of the phenomenon of search engine optimisation (SEO), where experts specialise in advising companies or organisations on how to prepare their website content so that it is ranked as particularly important by Google’s crawlers and PageRank algorithm.
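The PageRank idea mentioned above can be sketched in a few lines. The link graph and parameters below are hypothetical, and Google’s production ranking combines many more signals; the sketch only shows the core mechanism that SEO targets: rank flows along links and, via power iteration, concentrates on heavily linked pages.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Power iteration over a dict mapping page -> list of outgoing links."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        # Each page keeps a base share, plus rank passed along incoming links.
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for p, outgoing in links.items():
            share = damping * rank[p] / len(outgoing)
            for q in outgoing:
                new_rank[q] += share
        rank = new_rank
    return rank

# Invented four-page web: three pages link to a central "hub".
graph = {
    "a": ["hub"],
    "b": ["hub"],
    "c": ["hub", "a"],
    "hub": ["a", "b"],
}
ranks = pagerank(graph)
print(max(ranks, key=ranks.get))  # "hub" attracts the most rank
```

SEO, in this simplified picture, is the business of reshaping a site and its inbound links so that more of this flow ends up on one’s own pages.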

The repercussions of new media on society are thus themselves effects of a complex system in which the technical workings of rating algorithms, user behaviour and, above all, the providers’ business goals produce emergent phenomena such as a marked radicalisation of the political landscape.

AI as a source of complexity in AI NAVI

In AI NAVI, AI systems do not primarily appear in the form of such recommender or rating systems. This by no means implies, however, that the feedback between AI systems, their users’ behaviour and the underlying intention of their use cannot likewise lead to undesirable emergent effects. Applications such as the Corona-Warn-App or systems for behavioural adaptation always have side effects on the behaviour of their users by virtue of being embedded in social reality.

This also applies to the AI systems in AI NAVI, which themselves become a source of complexity. Smart applications for climate change thus not only contribute directly to dealing with climate change, but also indirectly add to the complexity that must be dealt with. This role of AI is taken into account throughout the research design of AI NAVI and feeds significantly into the conception of the “socially informed neural networks”.

AI as a solution to complexity

Provided that the complexity-increasing effect of AI algorithms is also taken into account, AI research on personalised applications can make a significant contribution to making the complexity of systems such as the climate or pandemics more manageable for the individual. Exploring this possibility is one of the central goals of AI NAVI. Implementation, however, requires answering central questions, such as what role active versus passive use of AI plays.

The central example here is navigation systems: on the one hand they relieve users of elaborate research into the exact route to a destination, but in doing so they hand over to the machine the scope for action that users’ own creative activity would otherwise provide. AI-supported interaction with complex systems should therefore not amount to merely passive use, lest users, figuratively speaking, turn right at the wrong place.


Pariser, E. (2011) The Filter Bubble: What the Internet Is Hiding from You. Penguin Press, New York.