Corpus - Corpus, data and tools for research in the humanities and social sciences

Megastudies of visual and spoken word recognition – MEGALEX


The main goal of this project is to better understand the cognitive processes underlying visual and spoken word recognition in adults.

Understanding factors at play in visual and spoken word recognition.

The main goal of our project is to better understand the cognitive processes underlying visual and spoken word recognition. To date, nearly all research has been based on small studies involving a limited set of monosyllabic words and a limited number of lexical variables. The present project aims to supplement previous studies with a new approach, the megastudy approach, which tests a huge number of complex words (tens of thousands).

Our first goal is to apply the psychophysical approach developed by Keuleers et al. (2012) to French (visual lexical decision) on 28,000 words and 28,000 pseudowords, with a group of 100 participants each completing 20 hours of testing. Thanks to this mega-corpus, we will provide answers to some important unresolved theoretical issues in the field of visual word recognition. Collected reaction times will be submitted to multiple regression analyses (linear mixed effects) in order to study the influence of continuous lexical variables that have traditionally been treated as categorical in factorial designs.

Our second goal is to collect reaction times on the same number of words and pseudowords in a modality never tested before at such a large scale, namely the auditory modality. Megastudies are virtually nonexistent in auditory word recognition research. It is therefore crucial to provide and explore an auditory analogue of what has already been done in visual word recognition.

Thanks to these two megastudies, we will compare the similarities and differences between the visual and auditory modalities and identify the relative strength of the lexical variables influencing performance in each.

We will use the megastudy approach, which tests a huge number of words and pseudowords (around 56,000) on a small number of participants (about 100).

The task used in both megastudies is the lexical decision task (a classical task in the field of psycholinguistics).

In this task, words and pseudowords are presented in random order, and participants have to decide (via a gamepad) as rapidly and accurately as possible whether the stimulus (presented visually or aurally) is a French word or not. We measure reaction times and percent errors.

In our project, we will run two lexical decision tasks: one in the visual modality (reading) and one in the auditory modality (speech perception).
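To make the procedure concrete, here is a minimal console-based sketch of a lexical decision trial loop in Python. It is an illustration only: the real experiment uses dedicated presentation software and a gamepad, and the stimulus lists, response device, and timing precision below are simplified assumptions.

```python
import random
import time

# Toy stimulus lists; the actual experiment uses 28,000 words and
# 28,000 pseudowords (the items below are made-up examples).
words = ["maison", "cheval", "liberté"]
pseudowords = ["mipon", "chaval", "loberta"]

# Build and shuffle the trial list: (stimulus, is_word) pairs.
trials = [(w, True) for w in words] + [(p, False) for p in pseudowords]
random.shuffle(trials)

results = []
for stimulus, is_word in trials:
    t0 = time.perf_counter()  # stimulus onset
    answer = input(f"{stimulus}  -> word? (y/n) ").strip().lower()
    rt_ms = (time.perf_counter() - t0) * 1000.0  # reaction time in ms
    correct = (answer == "y") == is_word
    results.append((stimulus, rt_ms, correct))

# The two dependent measures described above: percent errors and
# mean reaction time on correct trials.
errors = [r for r in results if not r[2]]
correct_rts = [r[1] for r in results if r[2]]
print(f"percent errors: {100 * len(errors) / len(results):.1f}%")
if correct_rts:
    print(f"mean RT (correct trials): {sum(correct_rts) / len(correct_rts):.0f} ms")
```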

In progress.

The results of our project will have a significant theoretical impact on models of visual and spoken word recognition. In addition, these results will allow a better understanding of the lexical variables at play in the two modalities. Furthermore, the project will allow us (1) to conduct virtual experiments and (2) to test the robustness of experimental effects obtained with the factorial approach.
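As an illustration of what such a "virtual experiment" could look like, the sketch below re-creates a classic factorial contrast (high- vs. low-frequency words matched on length) from an item-level megastudy RT table. The file name and column names (`megalex_visual.csv`, `freq`, `nbletters`, `mean_rt`) are hypothetical placeholders, not the project's actual data format.

```python
import pandas as pd
from scipy import stats

# Hypothetical item-level table: one row per word with its mean RT
# across participants. Column names are assumptions for illustration.
items = pd.read_csv("megalex_visual.csv")  # columns: word, freq, nbletters, mean_rt

# Re-create a factorial frequency contrast: high- vs. low-frequency
# items of the same length, as a factorial study would match them.
same_length = items[items["nbletters"] == 6]
high = same_length[same_length["freq"] >= same_length["freq"].quantile(0.75)]
low = same_length[same_length["freq"] <= same_length["freq"].quantile(0.25)]

# Compare mean RTs between the two virtual conditions.
t, p = stats.ttest_ind(high["mean_rt"], low["mean_rt"], equal_var=False)
print(f"high-frequency mean RT: {high['mean_rt'].mean():.0f} ms")
print(f"low-frequency mean RT:  {low['mean_rt'].mean():.0f} ms")
print(f"Welch t = {t:.2f}, p = {p:.4f}")
```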

In progress.

For more than a century, researchers in psycholinguistics, cognitive psychology, and cognitive science have tried to understand the mental processes underlying visual and spoken word recognition (see e.g., Adelman, 2011; Balota, Yap & Cortese, 2006; Ferrand, 2007; Ferrand, New, Brysbaert, Keuleers, Bonin, Méot, Augustinova & Pallier, 2010; Grainger & Holcomb, 2009; Grainger & Ziegler, 2011; Spinelli & Ferrand, 2005; Dahan & Magnuson, 2006; Pisoni & Levi, 2007). To date, nearly all research has been based on small studies involving a limited set of monosyllabic words selected according to factorial designs, with a limited number of independent variables matched on a series of control variables. The present project aims to supplement previous studies with a new approach, the "megastudy approach", by (1) using multiple regression designs involving very large-scale stimulus sets; (2) investigating the cognitive processes underlying the visual and spoken word recognition of more complex words, i.e., polysyllabic and polymorphemic words; and (3) using the psychophysical approach (with a repeated-measures design) developed recently by Keuleers, Lacey, Rastle, and Brysbaert (2012).

This project has two main phases. In Phase 1, we will collect reaction times and percent errors in the visual lexical decision task on about 28,000 French words and 28,000 pseudowords with a small group of participants (n = 100). The 28,000 words (mainly polysyllabic and polymorphemic words of different lengths and frequencies) will be selected from the 130,000 distinct lexical entries available in Lexique (www.lexique.org; New, Pallier, Brysbaert, & Ferrand, 2004). We will also include inflected forms (such as feminine, plural, and verbal forms). Thanks to this mega-corpus, we will provide answers to some important unresolved theoretical issues in the field of visual word recognition. Collected reaction times will be submitted to multiple regression analyses (linear mixed-effects models: Baayen, Davidson, & Bates, 2008) in order to study the influence of continuous lexical variables that have traditionally been treated as categorical in factorial designs.
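A sketch of how such a stimulus selection could be scripted with pandas, assuming Lexique has been downloaded as a tab-separated file; the file name and column names (`ortho`, `freqfilms2`, `nblettres`, `nbsyll`) follow Lexique 3 conventions but should be checked against the release actually used, and the length range and bin sizes are illustrative assumptions.

```python
import pandas as pd

# Load Lexique (www.lexique.org) as a tab-separated file. File name
# and column names are assumptions based on Lexique 3.
lex = pd.read_csv("Lexique382.tsv", sep="\t")

# Keep distinct orthographic forms spanning a broad range of lengths
# and frequencies, including inflected forms (no lemma filtering).
candidates = (
    lex[["ortho", "freqfilms2", "nblettres", "nbsyll"]]
    .dropna()
    .drop_duplicates(subset="ortho")
)
candidates = candidates[candidates["nblettres"].between(3, 14)]

# Stratified sampling: draw items from each length bin so that long,
# polysyllabic words are well represented in the final set.
selection = (
    candidates.groupby("nblettres", group_keys=False)
    .apply(lambda g: g.sample(min(len(g), 2500), random_state=1))
)
print(len(selection), "words selected")
```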

In Phase 2, we will collect reaction times and percent errors on the same number of words and pseudowords in a modality never tested before at such a large scale, namely the auditory modality. Megastudies are virtually nonexistent in auditory word recognition research, where the literature has been dominated by small-scale experimental studies. It is therefore crucial to provide and explore an auditory analogue of what has already been done in visual word recognition. Presenting auditory stimuli requires more effort than presenting visual stimuli, but it is worth it because factors specific to the auditory modality influence auditory word recognition (e.g., phonological neighborhood density, stimulus duration, uniqueness point) in addition to the usual factors found in visual word recognition (e.g., word frequency, length in letters and syllables, semantic neighbors).
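To make one of these auditory-specific variables concrete, the sketch below computes a word's uniqueness point: the position of the phoneme at which the word diverges from every other word in the lexicon. The toy phonemic transcriptions are made up for illustration; in practice they would come from a phonemic lexicon such as Lexique's transcriptions.

```python
from collections import Counter

def uniqueness_points(lexicon):
    """For each phoneme string, return the 1-based position at which it
    diverges from all other entries (len + 1 if it never does, e.g., a
    word that is a prefix of another word)."""
    # Count how many lexicon entries share each phoneme prefix.
    prefix_counts = Counter()
    for phon in lexicon:
        for i in range(1, len(phon) + 1):
            prefix_counts[phon[:i]] += 1

    ups = {}
    for phon in lexicon:
        up = len(phon) + 1  # default: no diverging point within the word
        for i in range(1, len(phon) + 1):
            if prefix_counts[phon[:i]] == 1:  # prefix unique to this word
                up = i
                break
        ups[phon] = up
    return ups

# Toy phonemic lexicon (made-up transcriptions for illustration).
lexicon = ["mEzO", "mEzyR", "Saval", "Sapo"]
print(uniqueness_points(lexicon))
# "Saval" and "Sapo" diverge at their third phoneme; "mEzO" and
# "mEzyR" only at their fourth.
```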

To carry out this ambitious project, we have put together a dynamic, interdisciplinary team (whose members have already worked and published together) with strong expertise in psycholinguistics and data mining.

The collected reaction times and the sophisticated analyses (mixed models) we will conduct will allow us to (1) understand more precisely the functional architecture of the different levels of processing involved in both visual and spoken word recognition, (2) detail the nature of the representations on which these processes operate, and (3) study the type of coding (orthographic, phonological, morphological, semantic) used by these different levels of processing. These results will be crucial for models of reading and spoken word recognition. Overall, this work will lead to a better understanding of the factors at play in visual and spoken word recognition.
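As a sketch of the kind of mixed-model regression referred to here (Baayen et al., 2008), the Python code below fits a linear mixed-effects model of trial-level RTs with continuous lexical predictors as fixed effects and a random intercept per participant. The file and column names are hypothetical; note also that the fully crossed participant-and-item random effects used in the literature are more naturally fit with R's lme4, so this statsmodels version is a simplified approximation.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format table: one row per trial, with columns
# rt (ms), correct (0/1), subject, word, freq, nbletters, nbsyll.
trials = pd.read_csv("megalex_trials.csv")
trials = trials[trials["correct"] == 1]  # keep correct responses only

# Linear mixed-effects model: continuous lexical predictors as fixed
# effects, random intercept per participant. (Crossed item random
# effects, as in Baayen et al., 2008, would call for lme4 in R.)
model = smf.mixedlm(
    "rt ~ freq + nbletters + nbsyll",
    data=trials,
    groups=trials["subject"],
)
result = model.fit()
print(result.summary())
```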

Project coordination

Ludovic FERRAND (Laboratoire de Psychologie Sociale et Cognitive (CNRS UMR 6024)) – Ludovic.Ferrand@univ-bpclermont.fr

The author of this summary is the project coordinator, who is responsible for its content. The ANR accepts no responsibility for this content.

Partners

UNICOG Unité INSERM 992 de Neuroimagerie Cognitive
LPNC Laboratoire de Psychologie et NeuroCognition (CNRS UMR 5105)
CNRS DR12 - LPC Centre National de la Recherche Scientifique Délégation Provence et Corse - Laboratoire de Psychologie Cognitive
LAPSCO Laboratoire de Psychologie Sociale et Cognitive (CNRS UMR 6024)

ANR grant: 199,067 euros
Beginning and duration of the scientific project: January 2013 - 36 months
