Seminar by Lucas Parra
Thursday, January 15, 2026, 11:00 am, Salle Laurent Vinay, INT
Lucas Parra (City College of New York), invited by Davide Reato
Do We Predict What We See Next? How the Brain Integrates Semantic Information Across Eye Movements
Abstract: When we look at a scene, our eyes don’t scan smoothly. They move in quick jumps, called saccades, and pause briefly to gather information during what’s called a fixation. How does the brain build a stable, semantic picture from this series of snapshots? We know the brain can integrate simple visual features, such as color, across fixations. But we explored a bigger question: does the brain also integrate the semantic information of what we see? Does it predict the semantics of what we will look at before the next saccade even lands on the new fixation point?

We propose that the brain is constantly trying to predict the semantics of what it will see in the next fixation. It doesn’t try to guess every single pixel, but rather the general concept. This process may act as a form of self-training for the brain, similar to how modern self-supervised AI systems learn about the world. Our hypothesis implies that the brain computes a “prediction error” signal: a signal that is small if the new information from a fixation matches the prediction and large if it is surprising.

In this talk, I will present preliminary evidence from brain recordings in humans and non-human primates suggesting that this semantic error signal is real. This indicates that a key part of seeing isn’t just processing what’s in front of us, but constantly predicting what’s coming next.
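The prediction-error idea described above can be illustrated with a toy sketch, not the speaker's actual method: treat the predicted and observed content of a fixation as semantic embedding vectors (all names, dimensions, and the cosine-distance choice here are illustrative assumptions), and define the error as the distance between them, small when the fixation matches the prediction and large when it is surprising.

```python
import numpy as np

def semantic_prediction_error(predicted, observed):
    """Toy 'semantic prediction error': cosine distance between the
    embedding predicted for the next fixation and the embedding
    actually observed there. Small on a match, large on a surprise."""
    p = predicted / np.linalg.norm(predicted)
    o = observed / np.linalg.norm(observed)
    return 1.0 - float(p @ o)

# Hypothetical 128-dimensional semantic embeddings, for illustration only.
rng = np.random.default_rng(0)
predicted = rng.normal(size=128)
matching = predicted + 0.1 * rng.normal(size=128)  # fixation close to prediction
surprising = rng.normal(size=128)                  # unrelated content

print(semantic_prediction_error(predicted, matching))    # small
print(semantic_prediction_error(predicted, surprising))  # large
```

In this sketch, a matching fixation yields an error near zero while an unrelated one yields a much larger value, mirroring the hypothesized signal.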