Alex Penn’s talk at TEDxSouthamptonUniversity, a self-organised TED event at the University of Southampton in 2013, on Systems Aikido as a philosophy for engineering complex systems.
AI NAVI comprises four case studies, one in each of four countries.
Basic conceptual embedding in the project
At the core of AI NAVI is the examination of behavioural choices in dealing with complex societal challenges. This makes it necessary to examine the empirical situation more closely. To this end, four case studies will take place within AI NAVI, which will serve to better understand the relationship between behavioural decisions, the use of applications that rely on artificial intelligence algorithms, and societal negotiation.
In the four case study countries, empirical data will be generated in parallel and closely integrated, informing the research in WP3 and, conversely, receiving concrete starting points and details from it. It is of central importance that the results of the German case study are complemented by the three other case studies. The case studies are therefore closely linked conceptually and organisationally in order to generate data that is mutually comparable yet sufficiently culturally different from the situation in Germany. This requires case study countries whose standard of living is comparable to Germany’s but whose geographical distribution and cultural background differ sufficiently from Germany, without deviating too much from each other. The choice of the remaining case study countries therefore fell on the USA, Australia and the UK, which fulfil these conditions.
The two thematic focal points, climate change and pandemics, feature particularly in the case studies and form the foundation for examining the contexts that AI NAVI investigates.
As it becomes increasingly clear how complex systems such as climate change or global pandemics impinge on societal realities, the idea of innovative, AI-based solutions is coming to the fore. In a review article aimed at wide circles within and outside academia, Rolnick et al. (2019) call for more research on how AI systems can be used to combat climate change. From electricity grids increasingly controlled by AI to the machine-assisted management of forests, the article surveys a variety of application areas in which AI systems can be profitably used against climate change. A further point is the potential for individual behavioural change.
However, ideas to influence behaviour with the help of AI suffer from a fundamental research gap: the cognitive and social foundations of such AI-based behavioural adaptation are not clear. What would be needed, in order to influence behaviour beyond mere permanent advertising, are algorithms that could be called “socially informed neural networks”. Such algorithms would allow behavioural adjustments to be made not only on an individual basis but also on a social basis, e.g. suggesting which means of transport would be most climate-friendly for a journey, depending on social behavioural patterns such as commuting, which follow from multiple cultural, economic, legislative, and normative underpinnings.
The AI NAVI case studies serve as an empirical sandbox for such behavioural adaptations that come about with the help of AI algorithms. They are intended to create the possibilities for developing such “socially informed neural networks” by investigating both the behavioural changes influenced by AI systems and the social and individual repercussions of such use.
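To make the idea of socially informed behavioural suggestions concrete, a minimal sketch follows. It is purely illustrative and not part of the project’s methodology: all option names, numbers, and the linear “social norm” term are invented for this sketch; the project itself would learn such weightings from empirical case-study data, e.g. with neural networks.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TransportOption:
    name: str
    co2_kg: float   # trip emissions in kg CO2 (illustrative values)
    minutes: float  # individual door-to-door travel time

def social_norm_weight(share_using: float) -> float:
    """Toy social signal (invented linear form): options already common
    in the peer group carry less adoption friction."""
    return 1.0 - 0.5 * share_using

def score(option: TransportOption, share_using: float,
          climate_weight: float = 1.0) -> float:
    """Lower is better: individual time cost plus a climate cost that is
    modulated by how socially established the option is."""
    return option.minutes + climate_weight * option.co2_kg * social_norm_weight(share_using)

# Hypothetical commuting scenario: option -> share of peers already using it
peer_shares = {
    TransportOption("car", co2_kg=8.0, minutes=25): 0.70,
    TransportOption("train", co2_kg=1.5, minutes=40): 0.25,
    TransportOption("bike", co2_kg=0.0, minutes=55): 0.05,
}

best = min(peer_shares, key=lambda o: score(o, peer_shares[o], climate_weight=5.0))
```

The point of the sketch is only structural: the recommendation depends not just on individual cost but also on a socially derived term, which is exactly the kind of dependency the case studies are meant to investigate empirically.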
Case study countries
[Table: comparison of the case study countries by greenhouse gas emissions per capita and Covid infections per 100,000 inhabitants]
The project comprises four case studies, one in each of four countries: Germany, the US, the UK, and Australia. These case studies are intended to provide a background for the study of the research objectives and a specific behavioural domain for the experimentation and empirical research. They were chosen for their similarity in standard of living, which promises common behavioural patterns or lifestyles, and their dissimilarity in their reactions to climate change and the Covid-19 pandemic. This may provide the opportunity to detect cultural components of the behavioural domain and the associated decision-making.
Case study phases
The case studies are divided into four phases, the first of which will take place only in Germany, while the following three phases will run in parallel in all four case study countries. Each of these phases is built around a workshop whose content preparation, conception and organisation depend on the work in the corresponding phase. The workshops in the different case study countries will take place close together towards the end of each phase, but not exactly simultaneously, so that results from the other case studies can be responded to flexibly. The follow-up to the workshops held in each phase forms the starting point for the following phase, with the exception of the last, which primarily serves project presentation and evaluation. Each phase is projected to last between 7 and 9 months, with a buffer that can be used as needed to respond to individual adjustments. The workshops will follow a “safe space” concept that was developed by the partners in previous projects and will be adjusted to AI NAVI.
The first phase takes place only in the German case study and serves to lay the foundation for the further empirical work. It is divided into two components, and the associated workshop is planned with two corresponding conceptual foci.
The first component consists of reviewing research on the connection between lifestyles, consumer behaviour and previous behavioural adaptations on the one hand and climate change and pandemics on the other. For this purpose, the research landscape will be examined through desk research, central experts, such as climate researchers or virologists, will be identified, and a first preliminary mapping of social interaction with complex systems and the AI use involved will be carried out. This culminates in an expert workshop.
The second strand mirrors the first with a similar focus but reversed perspective and deals in particular with the psychological dimension of the relationship between lifestyles, consumer behaviour and behavioural adaptations, i.e. in particular the cognitive bases of behavioural influence. This strand will be combined with the first strand in the above-mentioned workshop in order to identify behavioural patterns and behavioural adaptations that are particularly suitable for investigation and on which the further phases can focus.
This builds on and extends the work of the Corona+ module in the planning grant, which already provided first insights into these relationships but could not support the more extensive work of the full project.
Based on the more precise identification of suitable behavioural patterns, phase 2 will begin the actual research in all case study countries, centred on participatory workshops with diverse, heterogeneous stakeholders. The aim is to add a more general perspective to the expert perspective of the first workshop, to work out possible culture-specific differences, and to conduct gamification and psychological experiments as well as to include first inputs from the AI research in WP3.
Phase 2 is therefore divided into three parts. First, the results from phase 1 will be processed, in particular to provide a demonstrator, or “Wizard of Oz” system, with which the connection between behavioural adaptations and AI applications can be investigated. In the second part, the exchange between the work packages takes place and the methodological preparation of the workshop with regard to the results from phase 1 is completed. The third part is the organisational preparation and conduct of the workshop, in which the workshop participants, with the help of participatory systems mapping, work out the connection between behaviour and the effects of complex systems with regard to the identified behavioural patterns. Crucially, possible gaps are to be identified and the concrete handling of the prepared demonstrator system is to be evaluated in order to develop a more precise understanding of the possibilities for influencing behaviour. Furthermore, workshop participants will not only be studied as users; through the specific methodological approach, their ideas will also serve as an impetus for further AI and experimental research, and elements of AI co-design and co-creation will be employed.
In the course of the workshop, the designed gamification and psychological experiments will be conducted, along with further investigation into complexity competences. This is done in particular under the guidelines set by the project’s primary research questions: the investigation of the cognitive foundations and the individual and social influence of active or passive AI use, the conditions of a complexity “sweet spot”, and behavioural adaptation through the use of applications.
Phase 3 builds on the outcomes of the previous phase and begins processing the workshop results, particularly with regard to the findings for the experiments in WP4 and the research in WP3. The possibilities for influencing behaviour identified in the previous phases are processed in this phase to derive possible intervention points that can be managed with the use of AI systems. These will be presented and adjusted in a further workshop with stakeholders mainly from NGOs, administration and politics.
In turn, specific possibilities for the use of such AI systems will be developed through systems mapping and other methodological approaches, which will further deepen the results from phase 2 and add a macro perspective to them.
Finally, Phase 4, in close cooperation with WP5, prepares the results of the previous workshops so that stakeholders and decision-makers from politics, industry and civil society are presented with concrete options for action for dealing with such societal challenges, as well as with the possibilities of using innovative AI-based solutions for behavioural adaptations in the context of “smart climate change” and “smart pandemics”. A concluding workshop will also discuss the conditions for innovations, how they can be implemented in society (based on the research into their cognitive and cultural foundations), and what resistance they may encounter.
Case study responsibilities
Johannes Gutenberg University Mainz
The Johannes Gutenberg University was founded in 1477 and is located in the capital of the federal state of Rhineland-Palatinate, where Johannes Gutenberg invented printing more than 500 years ago. Today, some 32,500 students, 10 percent from abroad, study at JGU (www.uni-mainz.de), making it one of Germany’s largest universities. With 75 fields of study and more than 260 degree courses, JGU offers an extraordinarily broad range of courses. JGU enjoys global eminence as a research-driven university and regularly achieves solid positions in international research rankings. Successes in the Excellence Initiative of the German federal and state governments have confirmed JGU’s academic status. Annually, about 700 PhD students complete their studies at JGU. Another attribute of JGU is its research-oriented teaching, which incorporates research-based topics in the curricula early on. Similar emphasis is placed on promoting and mentoring young research talents. JGU also considers the exchange of knowledge with society as one of its key duties. As an open university, JGU offers the populace a unique portfolio of information dissemination concepts that extend far beyond the scope of standard popular academic formats. Through its system of university governance, JGU makes sure that its members participate in the strategic planning and that outstanding academics get involved.
Research expertise related to AI NAVI
Based in the Institute of Sociology is the Chair of Sociology of Technology and Innovation. With its attached Social Simulation infrastructure (TISSS Lab), it is engaged in the investigation of complex social systems. Analysing social phenomena around the production, structures and consequences of social innovations helps to understand, describe and explain the complex dynamics and long-term effects of innovative change. For research, these complexity aspects require a computer-based lab research infrastructure, which supports a mix of quantitative and qualitative empirical methods combined with innovative methodological approaches from Computational Social Science such as social simulation. In particular, long-term impact assessment of changes in stakeholders’ interactional behaviour can be valuably addressed and investigated with such methodology.
Publications of institution
Ahrweiler, P., Frank, D., & Gilbert, N. (2019). Co-Designing Social Simulation Models For Policy Advice. In 2019 Spring Simulation Conference (SpringSim) (pp. 1–12). Tucson, AZ, USA: IEEE. https://doi.org/10.23919/SpringSim.2019.8732901 (peer-reviewed publication published in July 2019)
Herget, F., Kleppmann, B., Ahrweiler, P., Gruca, J., & Neumann, M. (2021/22). How perceived complexity impacts on comfort zones in social decision contexts – Combining gamification and simulation for assessment. Journal of Artificial Societies and Social Simulation special SSC2021 issue (peer-reviewed, accepted, to be published in 2021/22)
University of Surrey / CRESS
The University of Surrey has excellent academics whose mission is to lead pioneering research and innovation to create new thinking around, and to provide practical solutions for, some of the world’s main technological challenges. It works in partnership with international academia, industry, policy makers and commerce. Innovative and dynamic, and with around 15,000 students, SURREY is the Times and Sunday Times University of the Year 2016. It also ranks fourth in the Guardian University Guide 2016 and eighth in the Complete University Guide 2016. In the 2015/2016 QS World University Rankings, it is awarded five stars, the highest rating achievable, and is placed within the top one per cent of global higher education institutions. Involved in EC projects for more than 25 years, including around 190 funded from the FP7 and ongoing Horizon 2020 programmes, SURREY has extensive experience of acting as both coordinator and beneficiary. It excels at multidisciplinary and cross-border research and benefits from excellent professional and administrative support. The Centre for Research in Social Simulation (CRESS), headed by Professor Nigel Gilbert, is involved in a number of research projects applying simulation to areas such as environmental management, understanding value chains, the governance of science, web-based social networks, and basic research on modelling the evolution of social structure. It has a strong reputation in the methodology and application of agent-based modelling. Its work has been supported by the European Commission through sixteen project grants over the past 14 years and also by grants from the UK Research Councils.
Research expertise related to AI NAVI
CRESS provides expertise in social simulation including Agent Based Modelling (ABM), qualitative and quantitative social science methods, complexity science and expertise in participatory modelling, systems thinking and co-production approaches to aid individuals and groups in their understanding of, interaction with and steering of complex adaptive systems. Our innovative participatory systems mapping (PSM) and subjective network analysis methods allow bespoke design of workshops, complex system representation and analysis for multiple stakeholder and system contexts, and are extensively used by the UK government and diverse stakeholder groups. Our freely available PRSM software, https://prsm.uk/prsm.html, allows collaborative mapping and analysis workshops to be run online.
Publications of institution
Ahrweiler, P., Frank, D., & Gilbert, N. (2019). Co-Designing Social Simulation Models For Policy Advice: Lessons Learned From the INFSO-SKIN Study. In 2019 Spring Simulation Conference (SpringSim) (pp. 1–12). Tucson, AZ, USA: IEEE. https://doi.org/10.23919/SpringSim.2019.8732901
Gilbert, N., Ahrweiler, P., Barbrook-Johnson, P., Narasimhan, K. P., & Wilkinson, H. (2018). Computational Modelling of Public Policy: Reflections on Practice. Journal of Artificial Societies and Social Simulation, 21(1), 14. https://doi.org/10.18564/jasss.3669
Calder, M., Craig, C., Culley, D., de Cani, R., Donnelly, C. A., Douglas, R., Edmonds, B., Gascoigne, J., Gilbert, N., … Wilson, A. (2018). Computational modelling for decision-making: Where, why, what, who and how. Royal Society Open Science, 5(6), 172096. https://doi.org/10.1098/rsos.172096
Barbrook-Johnson, P., Badham, J., & Gilbert, N. (2017). Uses of Agent-Based Modeling for Health Communication: the TELL ME Case Study. Health Communication, 32(8), 939–944. https://doi.org/10.1080/10410236.2016.1196414
Kolkman, D. A., Campo, P., Balke-Visser, T., & Gilbert, N. (2016). How to build models for government: criteria driving model acceptance in policymaking. Policy Sciences, 49, 1–16. https://doi.org/10.1007/s11077-016-9250-4
Rowden, J., Lloyd, D. J. B., & Gilbert, N. (2014). A model of political voting behaviours across different countries. Physica A: Statistical Mechanics and Its Applications, 413, 609–625.
Gilbert, N. (2007). A generic model of collectivities. Cybernetics and Systems: An International Journal, 38(7), 695–706.
Arizona State University
Arizona State University (ASU) is a public metropolitan research university on five campuses across the Phoenix metropolitan area and four regional learning centers throughout Arizona. ASU’s charter is based on the “New American University” model created by ASU President Michael M. Crow upon his appointment as the institution’s 16th president in 2002. It defines ASU as “a comprehensive public research university, measured not by whom it excludes, but rather by whom it includes and how they succeed; advancing research and discovery of public value; and assuming fundamental responsibility for the economic, social, cultural and overall health of the communities it serves.” ASU is one of the largest public universities by enrollment in the United States. As of fall 2019, the university had nearly 90,000 students attending classes across its metro campuses and more than 38,000 students attending online, including 83,000-plus undergraduates and nearly 20,000 postgraduates. The university is organized into 17 colleges, featuring more than 170 cross-discipline centers and institutes. ASU offers 350 degree options for undergraduate students, as well as more than 400 graduate degree and certificate programs. The 2019 university ratings by U.S. News & World Report rank ASU No. 1 among the Most Innovative Schools in America for the fourth year in a row. Since 2005, ASU has been ranked among the top research universities in the U.S., public and private, based on research output, innovation, development, research expenditures, number of awarded patents and awarded research grant proposals. ASU is currently ranked among the top 10 universities without a traditional medical school for research expenditures. It shares this designation with schools such as Caltech, Georgia Tech, MIT, Purdue, Rockefeller, UC Berkeley, and the University of Texas at Austin.
ASU is classified as “R1: Doctoral Universities – Highest Research Activity” by the Carnegie Classification of Institutions of Higher Education. The university is one of the fastest growing research enterprises in the United States, receiving $618 million in fiscal year 2018.
Research expertise related to AI NAVI
The Center for Smart Cities and Regions’ (CenSCR) mission is to advance urban and regional innovation to make more inclusive, vibrant, resilient and sustainable communities. CenSCR collaborates with researchers, policy-makers, planners, entrepreneurs, industry and the public to enhance the ability of cities and regions to responsibly use emerging technological infrastructures and improve quality of life. “Smart technologies” and “big data” have rapidly emerged as hoped-for solutions to many of the challenges cities and regions face. Yet there is often a disconnect between the efforts of technology innovators and the local needs and context of policy-makers and communities. Leveraging resources from across ASU, CenSCR bridges this gap between innovations in data, technologies and urban governance to develop anticipatory capacities and responsible innovation processes to create positive futures for cities, regions and their diverse communities. CenSCR generates ideas, methods, scenarios, networks and spaces for collaboration, engagement, educational programs and other research products to enable our partners to leverage technological innovation to create the urban and regional futures they want. The center serves as a living laboratory for ASU’s own efforts in creating a smart campus, with opportunities for undergraduate and graduate students to work with multidisciplinary, cross-sectoral teams on real-world problems, as well as providing continuing and professional education to city officials on innovation, entrepreneurship and governance.
SFIA / Dr. Alex Smajgl
Managing Director, Sustainable Futures Institute Australia (SFIA) & Mekong Region Futures Institute (MERFI), North Warrandyte 4810 Victoria, Australia
Dr Alex Smajgl is the Managing Director of SFIA and MERFI. His work is focused on natural resource management in the context of climate change adaptation, involving highly participatory policy and planning approaches to effectively bridge research and policy. His project work involves the assessment of sustainable development and climate adaptation strategies, based on advanced integrated assessment modelling. He has worked in many parts of Australia and Asia on climate change adaptation, the water-food-energy nexus and the implementation of the Sustainable Development Goals, largely funded by DFAT, USAID, CGIAR, ADB, the World Bank, and GIZ. Most recently, several of his research projects delivered transboundary water management solutions for the GEF and FAO, involving improved governance and innovative incentive mechanisms to improve the resilience of communities to climate change.
Prior to 2014, Dr Smajgl worked as a senior research scientist (and intermittently as research director) for the CSIRO in Townsville, Australia. He also established and managed offices in Jakarta, Indonesia, and Bangkok, Thailand. He coordinated large-scale participatory research projects on the water-food-energy nexus in the context of climate adaptation, sustainability, resilience, poverty and environmental outcomes in Australia and in Southeast Asia (i.e. the Mekong region and Indonesia). Scientifically, his work focused on testing participatory process designs, decision-making processes, and integrated modelling to effectively link policy and research. His work in Australia focused on climate adaptation, water management, coastal ecosystems and livelihoods in the Great Barrier Reef region.
Prior to 2003, Dr Smajgl worked as a Research Associate/Assistant at the University of Münster, where he developed macro-economic models to advise the European Commission and several German ministries on climate policy outcomes, natural resource dynamics, energy security, and international trade. At the university he lectured on Environmental Economics, Natural Resource Management, Microeconomics, and Computational Modelling in the context of Energy and Climate Change Economics. His PhD in climate change economics was funded by the VW Stiftung.
Relevant publications of institution
- Moallemi A., de Haan F. J., Hadjikakou M., Khatami S., Malekpour S., Smajgl A., Stafford Smith M., Voinov A., Bandari R., Lamichhane P., Miller K. K., Nicholson E., Novalia W., Ritchie E. G., Rojas A. M., Shaikh M. A., Szetey K., and Bryan B. A. 2021. Evaluating Participatory Modeling Methods for Co-creating Pathways to Sustainability. Earth’s Future, 9, e2020EF001843.
- Voinov, A., et al., 2018. Tools and methods in participatory modeling: selecting the right tool for the job. Environmental Modelling and Software, 109, 232-255.
- Smajgl A, Barreteau O, 2017. Framing options for characterising and parameterising human agents. Environmental Modelling and Software. https://doi.org/10.1016/j.envsoft.2017.02.011
- Smajgl, A., & Ward, J. (2015). Evaluating participatory research: Framework, methods and implementation results. Journal of Environmental Management, 157, 311-319.
- Smajgl, A., Foran, T., Dore, J., Ward, J., & Larson, S., 2015. Visions, beliefs and transformation: Exploring cross-sector and trans-boundary dynamics in the wider Mekong region. Ecology and Society, 20(2):15.
- Smajgl A, 2015. Simulating Sustainability: Guiding Principles to Ensure Policy Impact. Lecture Notes in Artificial Intelligence, 9086, 3-12.
- Hassenforder, E., Smajgl, A., Ward, J. 2015. Towards understanding participatory processes: Framework, application and results. Journal of Environmental Management, 157, 84-95.
- Bohensky E, Smajgl A, Brewer T, 2013. Patterns in household engagement with climate change in Indonesia. Nature Climate Change. DOI: 10.1038/nclimate1762.
Sources for the case study country figures: Human Development Index 2021 (figures for 2020); Climate Change Performance Index 2021; figures for 2018 based on Ritchie & Roser (2020); Covid figures based on official data from Johns Hopkins University, 13 September 2021.
The Nobel laureate Daniel Kahneman stated that “In the economy of action, effort is a cost, and the acquisition of skill is driven by the balance of benefits and costs. Laziness is built deep into our nature.” (Kahneman, 2011, p. 35). This statement rests on the assumption that our mind operates with two different types of thinking: system 1 and system 2 (e.g., Oaksford & Chater, 2020). Type or system 1 is the evolutionarily older one. It is said to be fast, unconscious, automatic, effortless, and based on heuristics and beliefs. Type or system 2 thinking is the evolutionarily younger one. It is slow, conscious, reflective, effortful, and thus said to be the ‘rational’ system.
Humans are often seen as relying more on system 1, while artificial systems are of a system 2 type, i.e. rational and error-free, and much faster and more efficient than humans in certain domains. Artificial systems can therefore easily take over certain tasks and do them much better than humans. This is highly appreciated in production settings, where machines can take over simple, monotonous tasks (for humans, it is very difficult to stay attentive over long periods in monotonous tasks). But the major problem is that the more tasks artificial systems take over, the more we ‘outsource’ everything connected to them. Eventually, we rely on these systems entirely, no longer understand how they work (lay people and experts alike), and no longer know how to do the work ourselves when the artificial systems are unavailable (e.g., if they break down). Thus, the more we rely on artificial systems, the less capable we become ourselves. In essence, our cognitive abilities may decline with an ‘overuse’ of artificial systems, a phenomenon nowadays known as ‘digital dementia’ (e.g., Spitzer, 2012).
Therefore, what we need to do is to find the right balance between simple (passive) ‘AI use’ and the use of AI systems (actively) in order to ‘generate knowledge’. The latter is what we need to focus on in the future. Artificial systems are built and used to facilitate our lives (e.g., production) but it would be fatal to totally transfer reasoning and decision making to AI systems.
Returning to Kahneman’s observation that ‘laziness is built deep into our nature’: this may be acceptable, but it must not serve as an excuse for failing to mentally engage with what artificial systems do for us in everyday life. Rather, we should accept them as ‘interactive partners’ that can facilitate our lives without our becoming overly reliant on them.
So, whenever we address human thinking, we should keep this (abstract) differentiation of type 1 and type 2 thinking in mind. It becomes even more important when we want to draw parallels to ‘machine thinking’ and AI. Machines usually work on a binary level; they differentiate between ‘right’ and ‘wrong’ (i.e. system 2), whereas humans most often make their decisions based on experiences, beliefs, prejudices, and the like (i.e. heuristics and probabilities; system 1). Even though this two-fold view of human thinking is attractive, it also has its weaknesses (e.g., Varga & Hamburger, 2014; Oaksford & Chater, 2020). But addressing human and machine thinking in this fashion is also too simplified with regard to AI NAVI, because human (individual) thinking is always a matter of social interaction, since thinking rarely takes place in (social) isolation.
Thus, human thinking (i.e. reasoning and decision making) needs to be seen as a social process as well (e.g., Reis, 2020). Sometimes several cognitive agents sharing information/representations can come up with better decisions than individuals (e.g., Hutchins, 1991, 1995a), and often they also interact with artificial systems as socio-technical systems (Hutchins, 1995b).
Discussing information with others also influences our mental representations (e.g., mental model theory; Johnson-Laird, 1983; Johnson-Laird & Byrne, 1991). This, in turn, influences the way we interact with others and with artificial systems. Cognitive Psychology in general therefore needs to shift its focus towards interaction.
In order to systematically investigate the above issues, the following psychological experimentation is to be realized:
- Active vs. passive use of information in individuals and groups: differences in the decisions/mental representations of individuals depending on the instruction in learning experiments, i.e. whether they engage actively (generating knowledge) or passively (merely using given information). Such experiments have already been realised in the planning phase in Spatial Cognition with individuals (thus, we know that they work and can be adapted to other topics);
- For proper interaction with AI systems, we need to understand individual cognitive abilities and limitations (e.g., mental models, belief biases, working memory capacity), and possibly those of groups as well: lab experiments on how mental models/mental representations are affected or changed when thinking takes place in a social context (dyads or groups of three or four people); here, experts could also be integrated as a source of information in discussions (ideally realised in the form of reasoning experiments; see also Reis, 2020 from our research group);
- Motivational factors (intrinsic and extrinsic) and incentives in individuals and groups (is it always necessary for the individual to have personal benefits, or is it also acceptable merely to ‘see’ beneficial effects for others or society?);
- Attribution of responsibility: in preliminary experiments on Spatial Cognition (i.e. wayfinding), we found that when people interact with AI systems, many of us are tempted to attribute errors or false decisions (e.g., wrong turns) to the artificial systems. It is therefore important to communicate that most of these systems are only supposed to provide us with the best available information, while the decision is still to be made by humans (with the possibility of overriding the system’s suggestion).
Conclusion: Climate change is an issue that is relevant for every individual, no matter whether it is addressed on the individual level or on the social/societal level. However, the problem of climate change can only be solved on the socio-technical level, i.e. by humans and AI systems working together in order to save the planet.
One of the motives that significantly shapes the research in AI NAVI is what one could call the micro-macro problem. The social is, of course, an emergent phenomenon of individual behaviour, which is why a research project like AI NAVI must focus on precisely such individual behaviour. However, this raises a scaling problem: how does individual behaviour transfer to the big picture? How does one identify “the footprint” of society in individual behaviour?
The opposite approach, however, is not expedient either: the turn towards cultural-anthropological approaches in the humanities and social sciences, for example in micro-sociology, the history of everyday life or scholarship “from below”, shows that it is very difficult to draw conclusions about individual behaviour from a macro perspective.
At the beginning of these theoretical approaches thus stands a paradoxical observation: the classification patterns that are supposed to explain the possibilities of action in a society often fail to capture how people respond to the very macroscopic phenomena from which these patterns are derived. The Marxian theory of class antagonisms, for example, which is supposed to explain the limited possibilities of action of the nineteenth-century working class, cannot theoretically grasp the reactions of the very people it assigns to that class. An explanation of global events that is supposed to account for individual behaviour cannot explain the repercussions of that behaviour. More than that: Marxian theory is not merely an external observer of the processes of the late nineteenth century; as the intellectual basis of the organised labour movement, it is an essential part of those processes.
This observation established the cultural-anthropological perspective and the pars pro toto investigation in the social and historical sciences. Global analyses are no longer written; instead, the possibilities of action of individuals are examined in order to gain insight into the possibilities of action of social actors. Instead of the global, the focus shifts increasingly to the particular.
But even such an approach may fall short in the age of big data and globally operating digital companies. How people affected by drought in South Africa react to climate change is not independent of the possibilities of “smart climate change”, but it does not reveal enough about the necessary preconditions for such applications.
Therefore, AI NAVI is based on a more integrative approach, which can be represented by the metaphor of a triangle. The phenomena that AI NAVI studies, in particular behavioural patterns, are located in a field of tension between the three interacting dimensions “social”, “individual” and “technical”: individual behaviour in its totality constitutes social behaviour. Conversely, collective patterns of interpretation, behavioural norms and epistemes provide the framework within which individual behaviour can move. The technical dimension influences people’s individual behaviour, especially in the context of AI NAVI, where it occurs in particular in the form of personal assistants. “Smart climate change”, however, is by no means intended to influence individual behaviour merely as individual behaviour, but as a component of macroscopic social behaviour. Thus, all three dimensions stand in a constant state of tension within AI NAVI.
Artificial intelligence plays a dual role in AI NAVI. On the one hand, AI systems are studied and used to master complexity. On the other hand, AI systems contribute to complexity.
One of the keywords that has come up again and again in recent years in relation to changes in the political landscape is that of filter bubbles. The metaphor of the filter bubble, or the so-called filter bubble theory, was introduced in 2011 by the author and internet activist Eli Pariser (Pariser, 2011) and claims a connection between the recommendation algorithms of large digital media and increasing radicalisation. The algorithms used by Google, YouTube or Facebook to suggest content that users might also be interested in would, starting from an interest in certain topics, offer users increasingly radical content on the same topic. For example, if someone was interested in whether the moon landing was real, content would increasingly be suggested that dealt ever more intensively with “proving” that it was a conspiracy.
The difficulty of the filter bubble metaphor
The difficulty with this claim lies not in suggesting the phenomenon but in actually proving it. While the basic intuition cannot be dismissed, a more detailed examination of the phenomenon presents great difficulties. In fact, it often seems that it is not primarily the AI algorithms, i.e. the recommender systems, that lead to radicalisation, but a combination of business goals and these recommender systems that leads to content radicalisation.
In particular, a significant change to the recommender system of the video platform YouTube in 2015 seems to have contributed to a radicalisation of content. YouTube changed the weighting of different aspects of a video’s rating: while previously the trustworthiness of the producer was of great importance, in that the subscriber numbers of a channel were rated highly, the change reduced the importance of subscriber numbers and instead weighted the so-called click-through rate of videos more highly, i.e. concrete user behaviour with a specific video, for example how often users do not watch a video to the end. This significantly changed which videos reached many users. Conversely, this created an evolutionary pressure to adapt: content that could adapt to the new conditions reached users, while content that could not was increasingly marginalised.
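The effect of such a reweighting can be illustrated with a toy ranking function. YouTube’s actual ranking function is not public, so the features, weights and numbers below are purely illustrative assumptions:

```python
# Hypothetical ranking scores before and after a 2015-style reweighting.
# Feature values are assumed to be normalised to [0, 1]; the weights are
# invented for illustration, not taken from any real system.

def score_before(subscribers: float, click_through_rate: float) -> float:
    # Producer trustworthiness (subscriber count) dominates.
    return 0.8 * subscribers + 0.2 * click_through_rate

def score_after(subscribers: float, click_through_rate: float) -> float:
    # Per-video engagement (click-through rate) dominates.
    return 0.2 * subscribers + 0.8 * click_through_rate

# A small channel with a highly clickable video now outranks a large
# channel with an average one, reversing the earlier ordering.
small_viral = (0.1, 0.9)   # (subscribers, click-through rate)
big_average = (0.9, 0.3)

print(score_before(*small_viral) < score_before(*big_average))  # True
print(score_after(*small_viral) > score_after(*big_average))    # True
```

The reversal of the ordering is the “evolutionary pressure” described above: whatever the score rewards is what content producers adapt to.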
The changes were, of course, driven by YouTube’s business goal of keeping users on YouTube as long as possible. A side effect was that the recommender systems now gave high ratings to content that was particularly shrill and emotionalising, but presented within ten minutes and thus heavily abbreviated. It was not a specific preference for obscure or marginal content, but the preference for this style of presentation that brought many such marginal media creators into focus. Naturally, an ecosystem of other media creators then developed who wanted to profit further from the success of individual videos and creators and thus gave the particularly obscure topics a further resonance space.
It therefore seems that it is by no means only certain functionalities of the underlying algorithms that promoted the phenomenon of radicalisation, but primarily certain business practices that contributed to an economic environment in which the provision of suitable content becomes an economic necessity for its producers. One might think of the phenomenon of search engine optimisation (SEO), where experts specialise in advising companies or organisations on how to prepare the content of their websites in such a way that it is perceived as particularly important by Google’s crawlers and page-rank algorithms.
Thus, the repercussions of the new media on society are also effects of a complex system in which the technical functions of evaluation algorithms, user behaviour and especially the business goals of the providers produce emergent phenomena such as a clear radicalisation of the political landscape.
AI as a source of complexity in AI NAVI
AI systems do not primarily appear in AI NAVI in the form of such recommender or rating systems. However, this by no means implies that the feedback effects of AI systems, their users’ behaviour and the underlying intention of their use cannot also lead to undesirable emergent effects. Applications such as the Corona-Warn-App or systems for behavioural adaptation always have side effects on the behaviour of their users through their embedding in social reality.
This also applies to AI systems in AI NAVI, which thereby become a source of complexity themselves. Smart applications for climate change thus contribute to dealing with climate change not only directly, but also indirectly through the increase in complexity. This role of AI is taken into account throughout the research design of AI NAVI and feeds significantly into the conception of the “socially informed neural networks”.
AI as a solution to complexity
Provided that the effect of AI algorithms on increasing complexity is also taken into account, AI research on personalised applications can also make a significant contribution to making the complexity of systems such as the climate or pandemics more manageable for the individual. Exploring this possibility is one of the central goals of AI NAVI. However, implementation requires answering central questions such as what role active or passive use of AI plays.
The central example of this is navigation systems, which on the one hand relieve their users of elaborate research into the exact route to a certain place, but which conversely hand over to the machine the scope for action that would come from the users’ own creative activity. The way in which people deal with complex systems with the support of AI should therefore not consist of merely passive use, so that users do not simply turn right in the wrong place.
Pariser, E. (2011). The filter bubble: What the internet is hiding from you. Penguin Press, New York.
Gamification is a method of social research that is becoming increasingly widespread. As the name suggests, it is an attempt to pack certain phenomena into a game in such a way that they become accessible to social science research.
In principle, social science research is subject to a certain micro-macro problem. While the object of the social sciences is the social, in contrast to psychology, for example, which focuses mainly on the individual psyche, the social is not accessible to direct investigation. Sociology essentially copes either by making individuals the subject of its investigation and gaining insight into the social component of each individual by generating sufficient data, or by categorising and ordering macro-phenomena in a historical-analytical manner. Both approaches can proceed qualitatively as well as quantitatively, but they share the difficulty that the social dimension lies hidden, so to speak, behind their data and must first be unearthed through social-scientific work.
Gamification offers a way out of this difficulty. Groups of people playing together appear in these games not only as a collection of individuals, but also as a unit with specific characteristic properties. Conversely, they are also in a sense separated from society as a whole by the controlled environment of being embedded in the game as players. Social macro-phenomena can reveal themselves in the game behaviour, but they do not explicitly determine it. The social role that a player plays “in the real world” does not have to be fully expressed in his or her gaming behaviour. Conversely, however, the social footprint is expressed in the game as game behaviour: basic epistemological ideas, values, aspects of the collective psyche and also the social order.
This makes it possible to use games as a laboratory for social research. The controlled environment of a game allows for an explicit demarcation from the social environment, but at the same time it allows for research into the group of players as a social entity.
Game design in AI NAVI
This makes the purposeful development of games an important methodological procedure in social research. Research in the social laboratory “game” requires games that actually produce the phenomena to be researched.
Already within the planning grant of AI NAVI, two games in particular have been developed that are intended to shed more light on research questions within the framework of AI NAVI: the Party Game and the Corona Game.
The Party Game is relatively simple and primarily designed for co-creation by the players. The premise is simple: players find themselves at a party where they randomly end up at different tables. Based on a rule, they either feel comfortable at their table and want to stay, or they feel uncomfortable and want to leave. The rules for feeling comfortable are determined by the players in advance and are meant to form a gradient of difficulty. For example, the initial rule that the number of table neighbours of the opposite sex determines well-being was supplemented by the players with a rule that anyone standing next to a person with white socks feels uncomfortable, and with a rule that dice are rolled to determine who feels uncomfortable. In total, the players designed six rules, which became increasingly complicated and produced increasingly complex behaviour of the group of players.
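The initial rule can be sketched as a toy simulation. This is an illustrative reconstruction of the game logic, not the project’s actual implementation, and all parameter values (table count, guest count, step limit) are assumptions:

```python
import random

def run_party(n_tables=4, n_guests=16, max_steps=100, seed=1):
    """Toy version of the party game's initial rule: a guest feels
    comfortable if at least one neighbour at the table is of the
    opposite sex; uncomfortable guests move to a random table."""
    rng = random.Random(seed)
    # Each guest is a mutable pair [sex, current table].
    guests = [[rng.choice("MF"), rng.randrange(n_tables)] for _ in range(n_guests)]
    for step in range(max_steps):
        moved = False
        for guest in guests:
            sex, table = guest
            neighbours = [g for g in guests if g is not guest and g[1] == table]
            if not any(g[0] != sex for g in neighbours):
                guest[1] = rng.randrange(n_tables)  # leave the table
                moved = True
        if not moved:
            return step  # stable end state: everyone is comfortable
    return max_steps

steps_needed = run_party()
```

The stable end state of the simulation corresponds to the game’s end state in which all players are comfortable at their tables; the later, player-invented rules would replace the comfort condition inside the loop.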
Before and after the game, the players were asked to rate how “complex” they found the game, and after the game additionally to rate their “satisfaction” with it. The aim was to let the players explore for themselves, so to speak, how the complexity of the gameplay related to their satisfaction. A further, subtle aspect was that the players were deliberately asked not about the difficulty of the game but specifically about its complexity. This gave an insight into the players’ own understanding of how complex they found the game rounds under the different rules.
It is precisely this second aspect that reveals a strength of the gamification approach: the empirical calibration of descriptions. Instead of predefining terms lexically, players’ assessments of a game are elicited and compared with the data obtained from the games.
Agent-Based Modelling
This aspect can be deepened even further by additionally re-enacting the game digitally. In fact, the Party Game originates from the so-called party simulation, one of the most elementary agent-based models (ABMs).
ABM is a form of computer simulation that develops the concept of cellular automata further. ABMs consist of agents that interact with each other on the basis of predefined rules. In the process, patterns develop that make it possible to investigate how phenomena emerge from the interaction of the agents. A natural field of application for ABMs is therefore, for example, innovation networks, as in the SKIN model, where conclusions can be drawn from the interaction of stakeholders within innovation processes about the conditions under which innovations succeed.
At the same time, ABMs also allow an interaction to be analysed algorithmically. For example, the rules of the Party Game were implemented in an ABM and analysed in terms of measures of algorithmic complexity, and these numbers were linked to the players’ assessments. This yielded correlations between algorithmic complexity metrics and the players’ assessments, allowing a more vivid understanding of the relatively vague notion of complexity that emerges from the player survey.
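One crude way to attach such a number to a rule, sketched here purely as an illustration (the text does not specify the project’s actual complexity measures), is to record all agents’ move/stay decisions and use the compressed length of that trace as a proxy for algorithmic complexity:

```python
import random
import zlib

def trace_complexity(rule, n_agents=30, steps=100, seed=0):
    """Record each agent's move/stay decision per step and return the
    zlib-compressed length of the trace. More compressible traces
    correspond to simpler, more regular dynamics."""
    rng = random.Random(seed)
    states = [rng.random() for _ in range(n_agents)]
    trace = bytearray()
    for _ in range(steps):
        for i, s in enumerate(states):
            move = rule(s, rng)
            trace.append(1 if move else 0)
            if move:
                states[i] = rng.random()  # moving agents get a new state
    return len(zlib.compress(bytes(trace)))

# A deterministic rule produces a far more compressible (simpler)
# trace than a dice-roll rule like the one the players invented.
simple = trace_complexity(lambda s, rng: False)
noisy = trace_complexity(lambda s, rng: rng.random() < 0.5)
```

Numbers obtained this way can then be correlated with the players’ complexity ratings, in the spirit of the calibration described above.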
The sweet spot and more complex games
The actual aim of the party game was to identify the sweet-spot phenomenon, i.e. whether there is a comfortable level of complexity. The original hypothesis was that very little complexity is boring and unexciting for players, so they tend to increase the complexity they are exposed to. At a certain point, however, the complexity becomes overwhelming and players begin to feel uncomfortable with the complexity they are exposed to.
This hypothesis was confirmed in an interesting way in the first exploratory trials of the game. The assumed U-shape of the complexity-satisfaction curve did not appear in the data; rather, the data rudimentarily revealed an inverted “W”. While the basic assumption that both too little and too much complexity is unattractive to players was confirmed, there was an additional slight drop in satisfaction around medium complexity. Even exposure to “normal” complexity is apparently not a particularly satisfying state for players.
This first game, however, served primarily to test the basic approach of combining gamification and social simulation. The game is still too simple for a closer exploration of complex dynamics in social interaction. In particular, the difficulty of distinguishing complexity from mere intricacy may have entered the data as an artefact of the players’ assessments. This makes it necessary to develop the approach further, and it has already begun to evolve in three main directions:
1. More complex games
2. The use of AI to analyse algorithmic complexity
3. Learning agent-based models
The Corona Game
As part of the planning grant, another game has already been designed and tried out for the first time: the Corona Game. In the Corona Game, a community of players must maintain their daily lives. The players have a number of options: they can work or go to school, buy products in the supermarket, do their banking, decide on measures in the town hall, or spend their time in a lounge. However, a pandemic has broken out in the game community and the corona virus is circulating. The players must therefore master the pandemic situation through their game behaviour while still attending to everyday necessities such as earning money or running errands.
The game is used to examine patterns of cooperation in a tense situation. The players can decide on socio-economic assistance such as minimum wages, health insurance, vaccine development funds or a social welfare system. At the same time, they can also earn extra money and protect themselves by buying shares and products such as disinfectant.
The game follows a culturally comparative approach, which makes it possible to analyse game behaviour in terms of culturally typical approaches to cooperation. By combining the game with another game, the value auction, in which players can bid for values, it becomes possible to relate game behaviour to value concepts and to the establishment of collective strategies for coping with crises. The first test games have already shown correlations between values and game strategies.
AI & ABM
The Corona Game and the Party Game differ fundamentally in the way complexity occurs in them. The Corona Game is open-ended and explores the formation of complex cooperation patterns without a clear end state. The Party Game, on the other hand, has a clear end state, namely that all players are satisfied at their tables, and explores the complex patterns that emerge along the way.
Especially the latter allows the use of, and analysis by means of, AI algorithms. In the first analyses, neural networks were trained to learn the rules of the party simulation. The learning curve of the neural network allows conclusions to be drawn about the learning behaviour of human players and thus provides initial insights into the cognitive challenge of a seemingly simple phenomenon: players move or do not move, and the AI has to find out why in order to be able to judge the game.
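As a sketch of this idea, a single logistic unit can be trained to predict whether an agent moves, and its learning curve read off per training epoch. The project’s actual networks and training setup are not described here, so everything below, including the feature and the rule, is an illustrative stand-in:

```python
import math
import random

def train(samples, epochs=200, lr=0.5):
    """Train one logistic unit with online gradient descent and record
    the training accuracy per epoch (the learning curve)."""
    w, b = 0.0, 0.0
    curve = []
    for _ in range(epochs):
        correct = 0
        for x, y in samples:
            z = max(-60.0, min(60.0, w * x + b))  # clamp for numerical stability
            p = 1.0 / (1.0 + math.exp(-z))        # predicted probability of moving
            correct += int((p > 0.5) == y)
            w += lr * (y - p) * x                  # logistic-regression gradient step
            b += lr * (y - p)
        curve.append(correct / len(samples))
    return curve

rng = random.Random(0)
# Toy stand-in for the party simulation's learning task: an agent moves
# (y = True) when the fraction x of differing neighbours is below one half.
data = [(x, x < 0.5) for x in (rng.random() for _ in range(200))]
curve = train(data)
```

How quickly `curve` rises is the kind of signal the text refers to: the steeper the learning curve, the easier the underlying rule is to infer from observed behaviour.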
Another approach coupling AI systems and ABM in AI NAVI are so-called Learning Agent-Based Modelling (LABM) systems, in which the rule-based interactions between the agents are replaced by interactions based on reinforcement learning. These LABMs are particularly interesting for games with a competitive aspect: the individual agents have clearly defined actions and a clear goal, but it remains open how they achieve this goal, and not all agents can achieve their goals at the same time. The concrete behaviour then results from trial and error and reinforcement-learning heuristics: the agents try out behaviour and learn which behaviours are more promising than others. The behaviour the other agents have learned up to a given point also serves as the basis for the further development of each agent’s game strategy. This makes it possible to grasp and analyse more clearly the evolution of complex game strategies that also arise in games with humans, and to investigate tipping-point phenomena, in which a specific and more or less random situation determines the development of a more fundamental subsequent situation.
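A minimal version of such a setup can be sketched as follows; the specifics (two agents, two tables, the payoff, the learning parameters) are assumptions made for illustration, not taken from the project:

```python
import random

def run_labm(episodes=2000, lr=0.1, eps=0.1, seed=42):
    """Two independent Q-learning agents repeatedly choose one of two
    tables; a table only pays off for an agent sitting there alone, so
    the agents cannot both succeed with the same choice."""
    rng = random.Random(seed)
    q = [[0.0, 0.0], [0.0, 0.0]]  # q[agent][action], action = chosen table
    for _ in range(episodes):
        actions = []
        for a in range(2):
            if rng.random() < eps:                       # explore
                actions.append(rng.randrange(2))
            else:                                        # exploit current estimate
                actions.append(0 if q[a][0] >= q[a][1] else 1)
        for a in range(2):
            # Reward 1 only when the agents end up at different tables.
            reward = 1.0 if actions[a] != actions[1 - a] else 0.0
            q[a][actions[a]] += lr * (reward - q[a][actions[a]])
    return q

q_values = run_labm()
```

Starting from identical, uninformative estimates, a random exploration step typically breaks the symmetry and the pair locks into an anti-coordination pattern: a small example of a tipping point, where one chance event determines the subsequent stable configuration.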
Complexity is a word that has been heard more and more in recent years. When it comes to characterising populism or conspiracy theories, for example, it is often said that they provide simple answers to complex questions. But scientists also speak of complex systems in many areas, such as climate change or ecosystems, or describe the infection dynamics of the Covid-19 pandemic as a complex system. Sometimes one gets the feeling that complexity is used as a buzzword onto which everything that overwhelms people is shifted. But that would fall short. Unfortunately, complexity is itself a complex issue.
As with many such terms, there is no clear and unambiguous lexical definition that is completely uncontroversial. Nor is it desirable to try to anatomically neatly dissect such phenomena as complexity and put them in a drawer that can then be put on the shelf of classified phenomena. Conversely, this does not mean that complexity is something onto which everything can be projected.
Complexity is a phenomenon that is very closely related to ideas of pattern formation in an object area in which the patterns develop a kind of life of their own as emergent phenomena. Just as a building complex is more than a mere juxtaposition of buildings and many of the interactions between the buildings are only made possible by their respective specific embedding in the building complex, many other complex systems are also characterised by interactions between their components that give the impression of creating something new.
Complexity in AI NAVI
Complexity is one of the central motifs in AI NAVI. It forms the framework for what AI NAVI wants to examine in concrete terms. In this context, complexity occurs on the one hand very generally and on the other hand very specifically, so that what lies between the very general and the very specific can be better investigated.
The general form in which complexity occurs in AI NAVI is the fundamental engagement with complexity. Complexity poses a cognitive and epistemic challenge to societies. Often, complex systems are characterised not only by pattern formation and emergent phenomena, but also by a certain lack of clarity. This also makes them a cognitive challenge that must be taken into account in collective as well as individual decisions on action. Norms or laws, as collective patterns of action or behaviour, must take into account the interactions of a complex system in order to be meaningful. At the same time, however, individual behavioural adaptation in everyday social life requires that we develop a sense of the complex interactions that we cannot capture through rules of behaviour. The most obvious example of this is social exchange and the double contingency it contains.
Not only must the way we make contact with others follow some kind of pattern, such as phonological patterns of words in a language, in order to be understood at all, but there must also be a basic cognitive attitude that makes communication possible in the first place: I must assume, for example, that my counterpart assumes that I assume … that the act of communication conveys meaning. The problem of double contingency produces an infinite regress of meaning and meaning attribution, which is a phenomenon that is as complex as it is everyday. Understanding meaning and ascribing meaning are basic motifs of individual behaviour to which we are exposed on a daily basis.
The more specific form in which complex systems appear in AI NAVI is the study of how to deal with concrete complex systems that impact our societies in socially disruptive ways: climate change and pandemics. Both are complex phenomena. The biochemical process of mutation of a coronavirus, as seems to have occurred at the end of 2019, and its biochemical interaction with human organisms do not by themselves explain the entire dimension of the global pandemic of the past two years. At the same time, one can largely set these biochemical processes aside when discussing mask and vaccination mandates, attempts to stabilise a stumbling global economy, or a country’s travel regulations. The pandemic only emerges in the interaction of all these processes: the Covid-19 pandemic cannot be understood without the background of a global economy and a globalised world that is suddenly confronted with the biochemical processes mentioned above.
The situation is similar with climate change. It goes without saying that the basic driver of climate change is a physical property of greenhouse gases: radiation of certain wavelengths is absorbed and re-emitted by these gases, while radiation of other wavelengths passes through. But it is equally clear that the mechanism of climate change cannot be explained without the man-made dimension: the forms of production that have prevailed since the industrial revolution are based on the emission of precisely such greenhouse gases. Only the combination of the physical greenhouse effect and the socio-economic effect of industrialisation provides a plausible basis for explaining climate change.
Climate change and pandemics are thus archetypes of socio-natural complex systems in which every decision humans make to deal with the challenges these phenomena pose has an impact on the phenomena. More to the point, the phenomena are in many ways essentially shaped by human decisions and behaviour.
AI NAVI aims to combine these two perspectives: The general perspective, which tries to grasp the fundamental aspects of complexity, with a specific perspective, which tries to connect and examine the concrete events in the confrontation with concrete examples of complex systems.