RTG 2175 Perception in Context and its Neural Basis

TP Geyer/Müller: Modelling cognition by means of evolutionary mechanisms

PhD student Werner Seitz:

The Interface Theory of Perception (Hoffman, Prakash, & Singh, 2015) suggests that perception is a product of evolution and is therefore optimized not to depict the "true nature" of the world, but to maximize our chances of survival and reproduction. Hoffman et al. (2015) argue that perception most likely hides the appearance of a "thing-in-itself", simplifying and transforming it into the most convenient, rather than the most truthful, representation. This view can readily be extended to other cognitive functions, such as attention, long-term memory, self-consciousness, or abstract thoughts and feelings. Nor does this principle hold only in a metaphysical sense: our cognition and its structure have evolved to ensure our survival, not to show us the truth. In their book "Computational Cognitive Neuroscience", O'Reilly and Munakata (2012) argue that "The foundations of cognition are built upon the sensory-motor loop - processing sensory inputs to determine which motor action to perform next. This is the most basic function of any nervous system".

In this new RTG project, we advocate a view of cognition as having developed from "un-cognitive", simpler central nervous systems. This allows us to tackle novel questions that go beyond current computational approaches in cognitive neuroscience and artificial-intelligence research. On the one hand, we will model increasingly "cognitive" systems starting from a canonical sensorimotor loop, particularly in relation to visual search - the well-established task of looking for a target object in a cluttered array of distractor objects. On the other hand, we will build computational models based on theoretical considerations and test whether these models can explain neuronal functioning, instead of conventionally attempting to reverse-engineer neural structures and derive explanations for cognitive phenomena from them. With this approach, we aim to gain insights into rich representations, intuitive physics, and commonsense reasoning (Lake et al., 2017) - topics that are currently insufficiently solved and heavily debated in AI research.
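To make the idea of "modelling cognition by means of evolutionary mechanisms" concrete, a minimal sketch might evolve a fixation policy for a toy visual-search display: a genome of per-location weights is selected and mutated until the agent reliably fixates the target among distractors. Everything here (the function names, the salience values, the selection scheme) is a hypothetical illustration of the sensorimotor-loop-plus-evolution principle, not the project's actual model.

```python
import random

# Toy visual-search display: one salient target (value 1.0) among
# distractors of lower, noisy salience (~0.3-0.4).
def make_display(n_items=8):
    display = [0.3 + 0.1 * random.random() for _ in range(n_items)]
    target = random.randrange(n_items)
    display[target] = 1.0
    return display, target

# The agent's "genome" is one weight per location; its sensorimotor
# policy is simply to fixate the location with the highest weighted
# salience. Fitness = proportion of trials where it fixates the target.
def fitness(genome, trials=50):
    hits = 0
    for _ in range(trials):
        display, target = make_display(len(genome))
        choice = max(range(len(genome)),
                     key=lambda i: genome[i] * display[i])
        hits += (choice == target)
    return hits / trials

# Truncation selection plus Gaussian mutation: the better half of the
# population survives and spawns mutated offspring each generation.
def evolve(n_items=8, pop_size=30, generations=40, sigma=0.1):
    pop = [[random.random() for _ in range(n_items)]
           for _ in range(pop_size)]
    for _ in range(generations):
        survivors = sorted(pop, key=fitness, reverse=True)[:pop_size // 2]
        offspring = [[w + random.gauss(0, sigma)
                      for w in random.choice(survivors)]
                     for _ in range(pop_size - len(survivors))]
        pop = survivors + offspring
    return max(pop, key=fitness)
```

Under this setup, evolution flattens the weight profile so that the target's raw salience advantage decides every fixation - a (deliberately crude) example of a "cognitive" competence emerging from selection pressure on a sensorimotor loop rather than from an explicitly designed search algorithm.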