

Visual cognition relies on the interplay between perception and memory. We create visual objects by integrating features that belong together 1 and maintain object representations in visual working memory (WM) 2, which we can access flexibly when objects are no longer visible. Objects can slightly change their appearance from moment to moment due to movement or changed lighting, without changing their identity. As these changes of the world around us are often foreseeable, current object representations can be based on preceding ones. Serial dependence is thought to promote perceptual stability by compensating for small changes of an object’s appearance across memory episodes. So far, it has been studied in situations that comprised only a single object. The question of how we selectively create temporal stability of several objects remains unsolved. In a memory task, objects can be differentiated by their to-be-memorized feature (content) as well as accompanying discriminative features (context). We test whether congruent context features, in addition to content similarity, support serial dependence. In four experiments, we observe a stronger serial dependence between objects that share the same context features across trials. Apparently, the binding of content and context features is not erased but rather carried over to the subsequent memory episode. As this reflects temporal dependencies in natural settings, our findings reveal a mechanism that integrates corresponding content and context features to support stable representations of individualized objects over time.
