Nobel laureate Daniel Kahneman stated that “In the economy of action, effort is a cost, and the acquisition of skill is driven by the balance of benefits and costs. Laziness is built deep into our nature.” (Kahneman, 2011, p. 35). This statement rests on the assumption that our mind operates with two different types of thinking: system 1 and system 2 (e.g., Oaksford & Chater, 2020). Type or system 1 is the evolutionarily older one. It is said to be fast, unconscious, automatic, and effortless, and to be based on heuristics and beliefs. Type or system 2 thinking is the evolutionarily younger one. It is slow, conscious, reflective, and effortful, and is thus said to be the ‘rational’ system.
Humans are often seen as relying more on system 1, while artificial systems are regarded as system 2 types, i.e. rational and error-free, and much faster and more efficient than humans in certain domains. Artificial systems can therefore easily take over certain tasks and perform them much better than humans. This is highly appreciated in production settings, where machines can take over simple and monotonous tasks (it is very difficult for humans to remain attentive over longer periods in monotonous tasks). The major problem, however, is that the more tasks artificial systems take over, the more we ‘outsource’ everything connected to them. In the end, we rely totally on these systems, no longer understand how they work (lay people as well as experts), and no longer even know how to do the work ourselves when the artificial systems are unavailable (e.g., when they break down). Thus, the more we ‘rely’ on artificial systems, the less capable we become ourselves. In essence, our cognitive abilities may decline with an ‘overuse’ of artificial systems. This phenomenon is nowadays known as ‘digital dementia’ (e.g., Spitzer, 2012).
Therefore, we need to find the right balance between simple (passive) ‘AI use’ and the (active) use of AI systems to ‘generate knowledge’. The latter is what we need to focus on in the future. Artificial systems are built and used to facilitate our lives (e.g., in production), but it would be fatal to transfer reasoning and decision making entirely to AI systems.
Returning to Kahneman’s observation that ‘laziness is built deep into our nature’: we may accept this as given, but it must not serve as an excuse not to engage mentally with what artificial systems do for us in everyday life. Rather, we should accept them as ‘interactive partners’ that can facilitate our lives without our becoming overly reliant on them.
So, whenever we address human thinking, we should keep this (abstract) differentiation between type 1 and type 2 thinking in mind. It becomes even more important when we want to draw parallels to ‘machine thinking’ and AI. Machines usually work on a binary level, differentiating between ‘right’ and ‘wrong’ (i.e. system 2), whereas humans most often base their decisions on experiences, beliefs, prejudices, and the like (i.e. heuristics and probabilities; system 1). Even though this two-fold view of human thinking is attractive, it also has its weaknesses (e.g., Varga & Hamburger, 2014; Oaksford & Chater, 2020). Moreover, addressing human thinking and machine thinking in this fashion with regard to AI-NAVI is too simplistic, because human (individual) thinking is always a matter of social interaction, since thinking rarely takes place in (social) isolation.
Thus, human thinking (i.e. reasoning and decision making) needs to be seen as a social process as well (e.g., Reis, 2020). Sometimes several cognitive agents sharing information/representations can reach better decisions than individuals (e.g., Hutchins, 1991, 1995a), and they often also interact with artificial systems as socio-technical systems (Hutchins, 1995b).
Discussing information with others also influences our mental representations (e.g., the mental model theory of Johnson-Laird, 1983, and Johnson-Laird & Byrne, 1991). This, in turn, influences the way we interact with others and with artificial systems. Thus, Cognitive Psychology in general needs to shift its focus towards interaction.
In order to investigate the above issues systematically, the following psychological experimentation is to be realized:
- Active versus passive use of information in individuals and groups: differences in decisions/mental representations of individuals based on the instruction in learning experiments, i.e. whether to engage actively (generating knowledge) or passively (just using given information). Such experiments have already been realized in the planning phase in Spatial Cognition with individuals (thus, we know that they work and can be adapted to other topics).
- For proper interaction with AI systems, we need to understand individual cognitive abilities and limitations (e.g., mental models, belief biases, working memory capacity), and possibly the abilities and limitations of groups as well: lab experiments on how mental models/mental representations are affected/changed when thinking takes place in a social context (dyads or groups of three or four people). Here, experts could also be integrated as a source of information in discussions (this can ideally be realized in the form of reasoning experiments; see also Reis, 2020, from our research group).
- Motivational factors (intrinsic and extrinsic) and incentives in individuals and groups: is it always necessary for the individual to have individual benefits, or is it also acceptable just to ‘see’ beneficial effects for others or society?
- Attribution of responsibility: in preliminary experiments on Spatial Cognition (i.e. wayfinding), we found that when people interact with AI systems, many are tempted to attribute errors or false decisions (e.g., wrong turns) to the artificial systems. Thus, it is important to communicate that most of these systems are only supposed to provide us with the best available information, but that the decision is still to be made by humans (with the possibility to override the system’s suggestion).
Conclusion: Climate change is an issue relevant for every individual, no matter whether it is addressed on the individual or on the social/societal level. However, the problem of climate change can only be solved on the socio-technical level, i.e. by humans and AI systems working together to save the planet.
Hutchins, E. (1991). The social organization of distributed cognition. In L. Resnick, J. Levine, & S. Teasley (Eds.), Perspectives on socially shared cognition (pp. 283–307). Washington, DC: American Psychological Association.
Hutchins, E. (1995a). Cognition in the wild. MIT Press.
Hutchins, E. (1995b). How a cockpit remembers its speed. Cognitive Science, 19, 265–288.