Project manager: Fred Mast
The aim of this project is to further investigate the mechanisms shared by mental imagery and visual perception. A wealth of behavioral evidence has been obtained since the 1970s about how people use different types of mental imagery abilities and how these relate to perception. Research in cognitive psychology has provided plenty of empirical demonstrations showing that mental images can facilitate or interfere with the process of visual perception. Moreover, a wealth of neuroimaging research has shown that brain activation during mental imagery and visual perception overlaps widely. The interpretation of these findings gave rise to intense discussions (the “Imagery Debate” in cognitive psychology) about the underlying format of representation. In this proposal we pursue a novel approach to explore the relation between mental imagery and perception: perceptual learning. Specifically, we study whether perceptual learning can be influenced by mental imagery and, vice versa, whether perceptual learning can influence mental imagery abilities. To our knowledge, we were the first to demonstrate that perceptual learning is still possible when the missing sensory stimulus is merely mentally imagined (Tartaglia, Bamert, Mast & Herzog, 2009, Current Biology). That is, we provided evidence for perceptual learning without perceptual input, which contradicts a purely stimulus-driven approach to perceptual learning. Subsequent experimentation in our labs provided further empirical support for perceptual learning via mental imagery. Still, however, we know relatively little about the mechanisms that underlie learning via mental imagery. Three approaches will be pursued in this application. 
In Subproject 1 we will run several psychophysical experiments testing paradigms known to induce or not to induce perceptual learning (e.g., no perceptual learning occurs when two different but spatially overlapping bisection stimuli are interleaved from trial to trial, so-called roving). It will then be possible to compare these perceptual learning tasks with mental imagery and see whether learning via imagery follows the same principles, such as the absence of transfer from one retinal location to another. In Subproject 2, we will investigate the influence in the reverse direction and see whether perceptual learning transfers to mental imagery abilities. Such transfer would imply that imagery representations are – at least partly – activated during perceptual learning. Mental imagery abilities are known to vary considerably between individuals, but – to our knowledge – no study has yet isolated the factors determining the improvement of mental imagery skills. Given that previous research has provided firm evidence for the involvement of early visual brain areas in mental imagery, the question arises whether existing top-down models in computational neuroscience can account for the effects of mental imagery training (e.g., the signal-to-noise ratio in the low-level representation will improve as a result of a strengthened top-down signal). In Subproject 3, a computational model of bisection discrimination will be developed and fitted to the experimental data gathered in Subprojects 1 and 2. Computational modeling in combination with experimental findings yields yet another important question, namely the distinction between imagery and perception. Using EEG, we will test whether cortical theta oscillations separate activation caused by mental imagery from activation caused by visual perception. 
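The top-down account mentioned above can be illustrated with a minimal signal-detection simulation. This is a toy sketch under our own simplifying assumptions (binary bisection offsets, Gaussian low-level noise, a multiplicative top-down gain), not the computational model to be developed in Subproject 3:

```python
import numpy as np

rng = np.random.default_rng(0)

def discrimination_accuracy(top_down_gain, n_trials=20000, noise_sd=1.0):
    """Toy signal-detection model of bisection discrimination.

    The bisection offset (+1 or -1, i.e., left vs. right of center) is
    encoded in a low-level representation corrupted by Gaussian noise.
    A top-down signal scales the stimulus-driven signal, raising the
    effective signal-to-noise ratio of that representation.
    """
    offsets = rng.choice([-1.0, 1.0], size=n_trials)
    internal = top_down_gain * offsets + rng.normal(0.0, noise_sd, n_trials)
    decisions = np.sign(internal)
    return np.mean(decisions == offsets)

# A strengthened top-down signal yields higher discrimination accuracy.
weak = discrimination_accuracy(top_down_gain=0.5)
strong = discrimination_accuracy(top_down_gain=1.5)
```

In this sketch, training that strengthens the top-down gain improves accuracy without any change in the low-level noise, which is one concrete way a top-down model could account for imagery-based learning effects.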
The outcome of the EEG experiments will help to distinguish between flag theory (mental images carry a label marking them as imagined) and readout theory (the distinction is made at the level of the lower-level representation itself). The experiments described in this proposal will help to explore the mechanisms that underlie mental imagery, how imagery relates to perception, and how computational neuroscience can be used to develop a neurofunctional theory of mental imagery.