Attention, Perception, & Psychophysics - AP&P

Subject: Psychology
Publisher: Springer Customer Service Center GmbH
ISSN: 1943-3921
Language: English

Title Information

Attention, Perception, & Psychophysics is an official journal of the Psychonomic Society. It spans all areas of research in sensory processes, perception, attention, and psychophysics. Most articles published are reports of experimental work; the journal also presents theoretical, integrative, and evaluative reviews.

Attribute amnesia can be modulated by foveal presentation and the pre-allocation of endogenous spatial attention

Even in sparse visual environments, observers may be unable to report, in response to a surprise question, features of objects they have just encountered. Attribute amnesia and seeing without knowing describe report failures for irrelevant features of objects that have been processed to some extent in the primary task. Both phenomena are attributed to the exclusive selection of relevant information for memory consolidation or for awareness, respectively. While attribute amnesia was found even for irrelevant attributes of the target in the primary task, seeing without knowing was not observed when a single object was presented foveally. To elucidate this discrepancy, we examined report failures for irrelevant attributes of single target objects, which were presented either in the fovea or in the periphery, and either at cued or uncued locations. On a surprise trial, observers were able to report the irrelevant shape and color of the target object when it was presented foveally. However, presenting the same object just slightly away from the fovea led to report failures for shape. Introducing a valid peripheral cue prior to target presentation reduced report failures for shape when the cue was predictive of the target location, suggesting that the pre-allocation of endogenous spatial attention promoted the processing of irrelevant shape information. In accordance with previous research, we suggest that these modulations are due to differences in late selection for conscious awareness or consolidation in working memory.

Correction to: A target contrast signal theory of parallel processing in goal-directed search
During production, Figure 4 was inadvertently used twice in the initial version of the article, appearing as both Figure 4 and Figure 5.

A limiting channel capacity of visual perception: Spreading attention divides the rates of perceptual processes

This study investigated effects of divided attention on the temporal processes of perception. During continuous watch periods, observers responded to sudden changes in the color or direction of any one of a set of moving objects. The set size of moving objects was a primary variable. A simple detection task required responses to any display change, and a selective task required responses to a subset of the changes. Detection rates at successive points in time were measured by response time (RT) hazard functions.

The principal finding was that increasing the set size divided the detection rates—and these divisive effects were essentially constant over time and over the time-varying influence of the target signals and response tasks. The set size, visual target signal, and response task exerted mutually invariant influence on detection rates at given times, indicating independent joint contributions of parallel component processes. The lawful structure of these effects was measured by RT hazard functions but not by RTs as such. The results generalized the time-invariant divisive effects of set size on visual process rates found by Lappin, Morse, & Seiffert (Attention, Perception, & Psychophysics, 78, 2469–2493, 2016). These findings suggest that the rate of visual perception has a limiting channel capacity.
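
Since the argument rests on RT hazard functions rather than mean RTs, a concrete estimator helps. Below is a minimal discrete-time hazard sketch (our own toy, not the authors' code; exponential RTs stand in for a constant detection rate, and the divisive prediction is that set size n scales that rate by 1/n):

import numpy as np

def discrete_hazard(rts, edges):
    """P(response in bin | no response before the bin)."""
    counts, _ = np.histogram(rts, bins=edges)
    at_risk = len(rts) - np.concatenate(([0], np.cumsum(counts)[:-1]))
    return np.where(at_risk > 0, counts / at_risk, np.nan)

rng = np.random.default_rng(0)
edges = np.arange(0.2, 1.0, 0.02)  # 20-ms bins, in seconds
# Set size 1: detection rate 4/s; set size 4: rate divided down to 1/s.
h1 = discrete_hazard(0.2 + rng.exponential(1 / 4, 10_000), edges)
h4 = discrete_hazard(0.2 + rng.exponential(1 / 1, 10_000), edges)
print(np.nanmedian(h1 / h4))  # ~4 in every bin (exactly 4 in continuous time)

A constant hazard ratio across bins, as in this toy, is the time-invariant divisive signature the abstract describes; mean RTs alone would not reveal it.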

Perception of means, sums, and areas

In this age of data visualization, it is important to understand our perception of the symbols that are used. For example, does the perceived size of a disc correspond most closely to its area, diameter, circumference, or some other measure? When multiple items are present, this becomes a question of ensemble perception. Here, we compare observers’ performance across three different tasks: judgments of (i) the mean diameter, (ii) the total diameter, or (iii) the total area of (N = 1, 2, 3, or 7) test circles compared with a single reference circle. We draw a parallel between Anne Treisman’s feature integration theory and Daniel Kahneman’s cognitive systems, comparing the preattentive stage to System 1, and the focused attention stage to System 2. In accordance with Kahneman’s prediction, the average size (diameter) of the geometric figures can be judged with considerable accuracy, but the total diameter of the same figures cannot. Like the total length, the cumulative area covered by circles was also judged considerably less accurately than the mean diameter. Differences in efficiency between these three tasks illustrate powerful constraints upon visual processing: The visual system is well adapted for the perception of mean size, while there are no analogous mechanisms for the accurate perception of total length or cumulative area. Thus, in visualizing data, bubble charts whose discs are scaled by area may be misleading, as our visual system seems better adapted to perceiving disc size by the radius than by the area.
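
The bubble-chart caveat at the end is plain arithmetic: area grows with the square of the diameter, so a reader who judges discs by diameter compresses area-encoded ratios. A quick illustration (ours, not from the article):

import math

value_ratio = 2.0                        # datum B is twice datum A
diameter_ratio = math.sqrt(value_ratio)  # if discs are scaled by area
print(round(diameter_ratio, 2))          # 1.41: a 2:1 datum looks like ~1.4:1

# Scaling diameters to the data instead preserves the 2:1 impression for a
# diameter-reader, but then the areas exaggerate the ratio (2**2 = 4:1).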

Free-choice and forced-choice actions: Shared representations and conservation of cognitive effort

We examined two questions regarding the interplay of planned and ongoing actions. First: Do endogenous (free-choice) and exogenous (forced-choice) triggers of action plans activate similar cognitive representations? And, second: Are free-choice decisions biased by future action goals retained in working memory? Participants planned and retained a forced-choice action to one visual event (A) while executing an immediate forced-choice or free-choice action (action B) to a second visual event (B); then the retained action (A) was executed. We found performance costs for action B if the two action plans partly overlapped versus did not overlap (partial repetition costs). This held true even when action B required a free-choice response, indicating that forced-choice and free-choice actions are represented similarly. Partial repetition costs for free-choice actions were evident regardless of whether participants did or did not show free-choice response biases. Also, a subset of participants showed a bias to freely choose actions that did not overlap (vs. did overlap) with the action plan retained in memory, which led to improved performance in executing action B and recalling action A. Because cognitive effort is likely required to resolve the feature code competition and confusion assumed to underlie partial repetition costs, this free-choice decision bias may serve to conserve cognitive effort and preserve the future action goal retained in working memory.

Natural music context biases musical instrument categorization

Perception of sounds occurs in the context of surrounding sounds. When spectral properties differ between earlier (context) and later (target) sounds, categorization of later sounds becomes biased through spectral contrast effects (SCEs). Past research has shown SCEs to bias categorization of speech and music alike. Recent studies have extended SCEs to naturalistic listening conditions, where the inherent spectral composition of (unfiltered) sentences biased speech categorization. Here, we tested whether natural (unfiltered) music would similarly bias categorization of French horn and tenor saxophone targets. Preceding contexts were either solo performances of the French horn or tenor saxophone (unfiltered; 1-s duration in Experiment 1, 3-s duration in Experiment 2) or a string quintet processed to emphasize frequencies in the horn or saxophone (filtered; 1-s duration). When SCEs occurred, they took the expected form: more “saxophone” responses following horn or horn-like contexts, and vice versa. One-second filtered contexts produced SCEs as in previous studies, but 1-s unfiltered contexts did not. Three-second unfiltered contexts biased perception, but to a lesser degree than filtered contexts did. These results extend SCEs in musical instrument categorization to everyday listening conditions.

Task representation affects the boundaries of behavioral slowing following an error

Researchers have long recognized the role that task representation plays in behavior. However, the specific influence that the structure of one’s task representation has on executive functioning has only recently been investigated. Prior research suggests that adjustments of cognitive control are affected by subtle manipulations of aspects of the stimulus–response pairs within and across task sets. This work has focused on examples of cognitive control such as response preparation, dual-task performance, and the congruency sequence effect. The current study investigates the effect of task representation on another example of control, post-error slowing. To determine whether factors that influence how people represent a task affect how behavior is adjusted after an error, an adaptive attention-shifting task was developed with multiple task-delimiting features. Participants were randomly assigned to a separate task set (two task sets) or an integrated task set (one task set) group. For the separate set group, the task sets switched after each trial. Results showed that only the integrated set group exhibited post-error slowing. This suggests that task representation influences the boundaries of cognitive control adjustments and has implications for our understanding of how control is organized when adjusting to errors in performance.

Holistic word processing is correlated with efficiency in visual word recognition

Holistic processing of visual words (i.e., obligatory encoding of/attending to all letters of a word) could be a marker of expert word recognition. In the present study, we thus examined for the first time whether there is a direct relation between the word-composite effect (i.e., all parts of a visual word are fully processed when observers perform a task on a word part) and fast access to the orthographic lexicon by visual word experts (i.e., fluent adult readers). We adopted an individual differences approach and used the word-frequency effect (i.e., faster recognition of high- than low-frequency words) in an independent lexical decision task as a proxy of fast access to lexical orthographic representations. Fluent readers with a larger word-composite effect showed a smaller word-frequency effect. This correlation was mainly driven by an association between a larger composite effect and faster lexical decisions on low-frequency words, probably because these lexical representations are less stable and integrated/unitized, hence allowing differentiation among fluent readers. We thus showed that holistic processing of visual words is indeed related to higher efficiency in visual word recognition by skilled readers.

Talker normalization is mediated by structured indexical information

Speech perception is challenged by indexical variability. A litany of studies on talker normalization have demonstrated that hearing multiple talkers incurs processing costs (e.g., lower accuracy, increased response time) compared to hearing a single talker. However, when reframing these studies in terms of stimulus structure, it is evident that past tests of multiple-talker (i.e., low structure) and single-talker (i.e., high structure) conditions are not representative of the graded nature of indexical variation in the environment. Here we tested the hypothesis that processing costs incurred by multiple-talker conditions would abate given increased stimulus structure. We tested this hypothesis by manipulating the degree to which talkers’ voices differed acoustically (Experiment 1) and also the frequency with which talkers’ voices changed (Experiment 2) in multiple-talker conditions. Listeners performed a speeded classification task for words containing vowels that varied in acoustic-phonemic ambiguity. In Experiment 1, response times progressively decreased as acoustic variability among talkers’ voices decreased. In Experiment 2, blocking talkers within mixed-talker conditions led to more similar response times among single-talker and multiple-talker conditions. Neither result interacted with acoustic-phonemic ambiguity of the target vowels. Thus, the results showed that indexical structure mediated the processing costs incurred by hearing different talkers. This is consistent with the Efficient Coding Hypothesis, which proposes that sensory and perceptual processing are facilitated by stimulus structure. Defining the roles and limits of stimulus structure on speech perception is an important direction for future research.

Alertness and cognitive control: Interactions in the spatial Stroop task

Cognitive control over information processing can be implemented by selective attention, but it is often suboptimal, as indicated by congruency effects arising from processing of irrelevant stimulus features. Research has revealed that congruency effects in some tasks are larger when subjects are more alert, and it has been suggested that this alerting–congruency interaction might be associated with spatial information processing. The author investigated the generality of the interaction by conducting a preregistered set of four experiments in which alertness was manipulated in variants of the spatial Stroop task, which involved classifying the spatial meaning of a stimulus presented at an irrelevant position. Regardless of stimulus type (arrows or words) and spatial dimension (horizontal or vertical), significant alerting–congruency interactions for response times were found in all experiments. The results are consistent with the suggestion that spatial attention and spatial information processing are important sources of the interaction, with implications for understanding how alertness is related to cognitive control.

Perception of being observed by a speaker alters gaze behavior

Previous research has shown that gaze behavior toward a speaker’s face during speech encoding is influenced by an array of factors relating to the quality of the speech signal and the encoding task. In these studies, participants were aware they were viewing pre-recorded stimuli of a speaker, a situation that is not representative of natural social interactions, in which an interlocutor can observe one’s gaze direction, potentially affecting fixation behavior due to communicative and social considerations. To assess the potential role of these factors during speech encoding, we compared fixation behavior during a speech-encoding task under two conditions: in the “real-time” condition, we used deception to convince participants that they were interacting with a live person who was able to see and hear them through online remote video communication. In the “pre-recorded” condition, participants were correctly informed that they were watching a previously recorded video. We found that participants fixated the interlocutor’s face significantly less in the real-time condition than in the pre-recorded condition. When participants did look at the face, they fixated the mouth for a higher proportion of the time in the pre-recorded condition than in the real-time condition. These findings suggest that people avoid potentially useful speech-directed fixations when they believe their fixations are being observed, and demonstrate that social factors play a significant role in fixation behavior during speech encoding.

Cross-modal psychological refractory period in vision, audition, and haptics

People’s parallel-processing ability is limited, as demonstrated by the psychological refractory period (PRP) effect: The reaction time to the second stimulus (RT2) increases as the stimulus onset asynchrony (SOA) between two stimuli decreases. Most theoretical models of PRP are independent of modalities. Previous research on PRP mainly focused on vision and audition as input modalities; tactile stimuli have not been fully explored. Research using other paradigms and involving tactile stimuli, however, found that dual-task performance depended on input modalities. This study explored PRP with all combinations of input modalities. Thirty participants judged the magnitude (small or large) of two stimuli presented in different modalities with an SOA of 75–1,200 ms. The PRP effect was observed in all modality combinations, i.e., RT2 increased with decreasing SOA. Only in the auditory-tactile condition did the accuracy of Task 2 decrease with decreasing SOA. In the auditory-tactile and tactile-visual conditions, RT to the first stimulus also increased with decreasing SOA. Current models can explain only part of the results; modality characteristics help to explain the overall data pattern better. Limitations and directions for future studies regarding reaction time, task difficulty, and response modalities are discussed.
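
For orientation, the standard central-bottleneck model (a textbook account of the PRP, not a claim specific to this article) compresses the RT2–SOA relationship into one expression, with A, B, and C denoting the durations of the perceptual, central, and motor stages of Tasks 1 and 2:

RT_2 = \max(A_1 + B_1,\ \mathrm{SOA} + A_2) + B_2 + C_2 - \mathrm{SOA}

At short SOAs the first argument of the max dominates, so RT2 falls with slope -1 as SOA increases; once the SOA exhausts the slack, RT2 flattens at A_2 + B_2 + C_2.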

Visual information is required to reduce the global effect

When a distractor appears in close proximity to a saccade target, the saccadic end point is biased towards the distractor. This so-called global effect reduces with the latency of the saccade if the saccade is visually guided. We recently reported that the global effect does not reduce with the latency of a double-step memory-guided saccade. The aim of this study was to investigate why the global effect in memory-guided saccades does not show the typically observed reduction with saccadic latency. One possibility is that reduction of the global effect requires continuous access to visual information about target and distractor locations, which is lacking in the case of a memory-guided saccade. Alternatively, participants may be inclined to routinely preprogram a memory-guided saccade at the moment the visual information disappears, with the result that a memory-guided saccade is typically programmed on the basis of an earlier representation than necessary. To distinguish between these alternatives, two potential targets were presented, and participants were asked to make a saccade to one of them after a delay. In one condition, the target identity was precued, allowing preprogramming of the saccade, while in another condition, it was revealed by a retro cue after the delay. The global effect remained present in both conditions. Increasing visual exposure of target and distractor led to a reduction of the global effect, irrespective of whether participants could preprogram a saccade or not. The results suggest that continuous access to visual information is required in order to eliminate the global effect.

Search and concealment strategies in the spatiotemporal domain

Although visual search studies have primarily focused on search behavior, concealment behavior is also important in the real world. However, previous studies in this regard are limited in that their findings about search and concealment strategies are restricted to the spatial (two-dimensional) domain. Thus, this study evaluated strategies during three-dimensional and temporal (i.e., spatiotemporal) search and concealment by asking participants to indicate where they would hide or find a target in a temporal sequence of items. The items were stacked in an upward (Experiments 1–3) or downward (Experiment 4) direction, and three factors were manipulated: scenario (hide vs. seek), partner type (friend vs. foe), and oddball (unique item in the sequence; present vs. absent). Participants in both the hide and seek scenarios frequently selected the oddball for friends but not foes, which suggests that they applied common strategies because the oddball automatically attracts attention and can be readily discovered by friends. Additionally, a principle unique to the spatiotemporal domain was revealed: when the oddball was absent, participants in both scenarios frequently selected the topmost item of the stacked layer for friends, regardless of temporal order, whereas they selected the first item in the sequence for foes, regardless of the stacking direction. These principles were not affected by visual masking or the number of items in the sequence. Taken together, these results suggest that finding and hiding positions in the spatiotemporal domain rely on the presence of salient items and on physical accessibility or temporal remoteness, according to partner type.

Correction to: Context effects on reproduced magnitudes from short-term and long-term memory
The citation of Hellström (1985) in the body and the reference section of this article was incorrectly printed as Helström.

Scene memory and spatial inhibition in visual search

Any object-oriented action requires that the object be first brought into the attentional foreground, often through visual search. Outside the laboratory, this would always take place in the presence of a scene representation acquired from ongoing visual exploration. The interaction of scene memory with visual search is still not completely understood. Feature integration theory (FIT) has shaped both research on visual search, emphasizing the scaling of search times with set size when searches entail feature conjunctions, and research on visual working memory through the change detection paradigm. Despite its neural motivation, there is no consistent neural process account of FIT in both its dimensions. We propose such an account that integrates (1) visual exploration and the building of scene memory, (2) the attentional detection of visual transients and the extraction of search cues, and (3) visual search itself. The model uses dynamic field theory, in which networks of neural dynamic populations supporting stable activation states are coupled to generate sequences of processing steps. The neural architecture accounts for basic findings in visual search and proposes a concrete mechanism for the integration of working memory into the search process. In a behavioral experiment, we address the long-standing question of whether both the overall speed and the efficiency of visual search can be improved by scene memory. We find both effects and provide model fits of the behavioral results. In a second experiment, we show that the increase in efficiency is fragile, and trace that fragility to the resetting of spatial working memory.
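
The dynamic field theory machinery invoked here has a compact canonical form: the activation u(x,t) of a neural population over a feature or spatial dimension x evolves according to the Amari field equation (standard textbook formulation, not quoted from this abstract):

\tau\,\dot{u}(x,t) = -u(x,t) + h + s(x,t) + \int w(x - x')\,\sigma\big(u(x',t)\big)\,dx'

where h < 0 is the resting level, s(x,t) is external input, \sigma is a sigmoid output nonlinearity, and w is a local-excitation/lateral-inhibition interaction kernel. The self-sustained activation peaks of this dynamics are the "stable activation states" that the proposed architecture couples into processing sequences.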

Searching for a face in the crowd: Pitfalls and unexplored possibilities

Finding a face in a crowd is a real-world analog to visual search, but extending the visual search method to such complex social stimuli is rife with potential pitfalls. We need look no further than the well-cited notion that angry faces “pop out” of crowds to find evidence that stimulus confounds can lead to incorrect inferences. Indeed, long before the recent replication crisis in social psychology, stimulus confounds led to repeated demonstrations of spurious effects that were misattributed to adaptive cognitive design. We will first discuss how researchers refuted these errors with systematic “face in the crowd” experiments. We will then contend that these more careful studies revealed something that may actually be adaptive, but at the level of the signal: Happy facial expressions seem designed to be detected efficiently. We will close by suggesting that participant-level manipulations can be leveraged to reveal strategic shifts in performance in the visual search for complex stimuli such as faces. Because stimulus-level effects are held constant across such manipulations, the technique affords strong inferences about the psychological underpinnings of searching for a face in the crowd.

Recognition-induced forgetting is caused by episodic, not semantic, memory retrieval tasks

Recognition-induced forgetting is a within-category forgetting effect that results from accessing memory representations. Advantages of this paradigm include the possibility of testing the memory of young children using visual objects before they can read, the testing of multiple types of stimuli, and use with animal models. Yet it is unknown whether just episodic memory tasks (Have you seen this before?) or also semantic memory tasks (Is this bigger than a loaf of bread?) will lead to this forgetting effect. This distinction will be critical in establishing a model of recognition-induced forgetting. Here, we implemented a design in which both these tasks were used in the same experiment to determine which was leading to recognition-induced forgetting. We found that episodic memory tasks, but not semantic memory tasks, created within-category forgetting. These results show that the difference-of-Gaussian forgetting function of recognition-induced forgetting is triggered by episodic memory tasks and is not driven by the same underlying memory signal as semantic memory.

Correction to: Desirable and undesirable difficulties: Influences of variability, training schedule, and aptitude on nonnative phonetic learning
Due to a production error, some IPA symbols were not included. The original article has been corrected.

The attractiveness of salient distractors to reaching movements is task dependent

Previous studies in visual attention and oculomotor research showed that a physically salient distractor does not always capture attention or the eyes. Under certain top-down task sets, a salient distractor can be actively suppressed, avoiding capture. Even though previous studies showed that reaching movements are also influenced by salient distractors, it is unclear if and how a mechanism of active suppression of distractors would affect reaching movements. Active suppression might also explain why some studies find reaching movements to curve towards a distractor, while others find reaching movements to curve away. In this study, we varied the top-down task set across three experiments by manipulating the certainty about the target location. Participants had to reach for a diamond presented among three circles. In Experiments 1 and 3, participants had to search for the reach target; hence, the certainty about the target’s location was low. In Experiments 2 and 3, the target’s location was cued before the reach; hence, the certainty about the target’s location was high. We found that reaches curved towards the physically salient color-singleton distractor in the search-to-reach task (Experiments 1 and 3), but not in the cued reach task (Experiments 2 and 3). Thus, the salient distractor attracted reaching movements only when the certainty of the target’s location was low. Our findings suggest that the attractiveness of physically salient distractors to reaching movements depends on the top-down task set. The results can be explained by the effect of active attentional suppression on the competition between movement plans.

A target contrast signal theory of parallel processing in goal-directed search

Feature Integration Theory (FIT) set out the groundwork for much of the work in visual cognition since its publication. One of the most important legacies of this theory has been the emphasis on feature-specific processing. Nowadays, visual features are thought of as a sort of currency of visual attention (e.g., features can be attended, processing of attended features is enhanced), and attended features are thought to guide attention towards likely targets in a scene. Here we propose an alternative theory – the Target Contrast Signal Theory – based on the idea that when we search for a specific target, it is not the target-specific features that guide our attention towards the target; rather, what determines behavior is the result of an active comparison between the target template in mind and every element present in the scene. This comparison occurs in parallel and is aimed at rejecting from consideration items that peripheral vision can confidently reject as being non-targets. The speed at which each item is evaluated is determined by the overall contrast between that item and the target template. We present computational simulations to demonstrate the workings of the theory as well as eye-movement data that support core predictions of the theory. The theory is discussed in the context of FIT and other important theories of visual search.
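
A toy numerical sketch of the theory’s core computation (entirely our illustration; the feature coding, the distance-based contrast measure, and the rate constant are arbitrary choices, not the authors’ simulation):

import numpy as np

rng = np.random.default_rng(1)
template = np.array([1.0, 0.0, 0.5])  # target template held in mind (toy features)
items = rng.random((8, 3))            # display items in the same feature space

# Target contrast signal: overall dissimilarity between item and template.
contrast = np.linalg.norm(items - template, axis=1)

# Core idea: all items are evaluated in parallel, and an item is rejected
# as a non-target faster the larger its contrast to the template.
k = 100.0                             # ms scaling constant (arbitrary)
rejection_ms = k / np.maximum(contrast, 1e-6)
print(np.round(np.sort(rejection_ms), 1))  # low-contrast lures dominate search time

The design choice this illustrates is that nothing target-specific guides attention; behavior falls out of how quickly each item’s contrast to the template lets peripheral vision reject it.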

Slow and fast beat sequences are represented differently through space

The Spatial-Numerical Association of Response Codes (SNARC) effect suggests the existence of an association between number magnitude and response position, with faster left-hand responses to small numbers and faster right-hand responses to large numbers. Recent studies have revealed similar spatial association effects for non-numerical magnitudes, such as temporal durations and musical stimuli. In the present study we investigated whether a spatial association effect exists between music tempo, expressed in beats per minute (bpm), and response position. In particular, we were interested in whether this effect is consistent across different bpm ranges. We asked participants to judge whether a target beat sequence was faster or slower than a reference sequence. Three groups of participants judged beat sequences from three different bpm ranges: a wide range (40, 80, 160, 200 bpm) and two narrow ranges (“slow” tempo: 40, 56, 88, 104 bpm; “fast” tempo: 133, 150, 184, 201 bpm). Results showed a clear SNARC-like effect for music tempo only in the narrow “fast” tempo range, with faster left-hand responses to 133 and 150 bpm and faster right-hand responses to 184 and 201 bpm. Conversely, no similar association emerged in either the wide or the narrow “slow” tempo range. This evidence suggests that music tempo is spatially represented like other continuous quantities, but its representation might be restricted to a particular range of tempos. Moreover, music tempo and temporal duration might be represented across space with opposite directions.

Loads of unconscious processing: The role of perceptual load in processing unattended stimuli during inattentional blindness

Inattentional blindness describes the failure to detect an unexpected but clearly visible object when our attention is engaged elsewhere. While the factors that determine the occurrence of inattentional blindness are already well understood, there is still a lot to learn about whether and how we process unexpected objects that go unnoticed. Only recently was it shown that, even without conscious awareness, characteristics of these stimuli can interfere with a primary task: Classification of to-be-attended stimuli was slower when the content of the task-irrelevant, undetected stimulus contradicted that of the attended, to-be-judged stimuli. According to Lavie’s perceptual load model, irrelevant stimuli are likely to reach awareness under conditions of low perceptual load, while they remain undetected under high load, as attentional resources are restricted to the content of focused attention. In the present study, we investigated the applicability of Lavie’s predictions to the processing of stimuli that remain unconscious due to inattentional blindness. In two experiments, we replicated the finding that unconsciously processed stimuli can interfere with intended responses. Our manipulation of perceptual load also had an effect on primary-task performance. However, against our hypothesis, these effects did not interact. Thus, our results suggest that high perceptual load cannot prevent task-irrelevant stimuli that remain undetected from being processed to an extent that enables them to affect performance in a primary task.

Understanding the visual perception of awkward body movements: How interactions go awry

Dyadic interactions can sometimes elicit a disconcerting response from viewers, generating a sense of “awkwardness.” Despite the ubiquity of awkward social interactions in daily life, it remains unknown what visual cues signal the oddity of human interactions and yield the subjective impression of awkwardness. In the present experiments, we focused on a range of greeting behaviors (handshake, fist bump, high five) to examine both the inherent objectivity of awkwardness judgments and the impact of contextual and kinematic information on the social evaluation of awkwardness. In Experiment 1, participants were asked to discriminate whether greeting behaviors presented in raw videos were awkward or natural, and if judged as awkward, participants provided verbal descriptions of the awkward greeting behaviors. Participants showed consensus in judging awkwardness from raw videos, with a high proportion of congruent responses across a range of awkward greeting behaviors. We also found that people used social-related and motor-related words in their descriptions of awkward interactions. Experiment 2 employed advanced computer vision techniques to present the same greeting behaviors in three different display types. All display types preserved kinematic information but varied contextual information: (1) patch displays presented blurred scenes composed of patches; (2) body displays presented human body figures on a black background; and (3) skeleton displays presented skeletal figures of moving bodies. Participants rated the degree of awkwardness of greeting behaviors. Across display types, participants consistently discriminated awkward and natural greetings, indicating that the kinematics of body movements plays an important role in guiding awkwardness judgments. Multidimensional scaling analysis based on the similarity of awkwardness ratings revealed two primary cues: motor coordination (which accounted for most of the variability in awkwardness judgments) and social coordination. We conclude that the perception of awkwardness, while primarily inferred on the basis of kinematic information, is additionally affected by the perceived social coordination underlying human greeting behaviors.

Spatial filtering restricts the attentional window during both singleton and feature-based visual search

We investigated whether spatial filtering can restrict attentional selectivity during visual search to a currently task-relevant attentional window. While effective filtering has been demonstrated during singleton search, feature-based attention is believed to operate spatially globally across the entire visual field. To test whether spatial filtering depends on search mode, we assessed its efficiency both during feature-guided search with colour-defined targets and during singleton search tasks. Search displays were preceded by spatial cues. Participants responded to target objects at cued/relevant locations, and ignored them when they appeared on the uncued/irrelevant side. In four experiments, electrophysiological markers of attentional selection and distractor suppression (N2pc and PD components) were measured for relevant and irrelevant target-matching objects. During singleton search, N2pc components were triggered by relevant target singletons, but were entirely absent for singletons on the irrelevant side, demonstrating effective spatial filtering. Critically, similar results were found for feature-based search. N2pcs to irrelevant target-colour objects were either absent or strongly attenuated (when these objects were salient), indicating that the feature-based guidance of visual search can be restricted to relevant locations. The presence of PD components to salient objects on the irrelevant side during feature-based and singleton search suggests that spatial filtering involves active distractor suppression. These results challenge the assumption that feature-based attentional guidance is always spatially global. They suggest instead that when advance information about target locations becomes available, effective spatial filtering processes are activated transiently not only in singleton search, but also during search for feature-defined targets.

Task-related gaze control in human crowd navigation

Human crowds provide an interesting case for research on the perception of people. In this study, we investigate how visual information is acquired for (1) navigating human crowds and (2) seeking out social affordances in crowds, by studying gaze behavior during human crowd navigation under different task instructions. Observers (n = 11) wore head-mounted eye-tracking glasses and walked two rounds through hallways containing walking crowds (n = 38) and static objects. For round one, observers were instructed to avoid collisions. For round two, observers furthermore had to indicate with a button press whether oncoming people made eye contact. Task performance (walking speed, absence of collisions) was similar across rounds. Fixation durations indicated that heads, bodies, objects, and walls held gaze for comparably long durations; only crowds in the distance held gaze relatively longer. We find no compelling evidence that human bodies and heads hold one’s gaze more than objects while navigating crowds. When eye contact was assessed, heads were fixated more often and for a longer total duration, which came at the cost of looking at bodies. We conclude that gaze behavior in crowd navigation is task-dependent, and that not every fixation is strictly necessary for navigating crowds. When explicitly tasked with seeking out potential social affordances, gaze is modulated as a result. We discuss our findings in the light of current theories and models of gaze behavior. Furthermore, we show that in a head-mounted eye-tracking study, a large degree of experimental control can be maintained while many degrees of freedom on the side of the observer remain.

Training attenuates the influence of sensory uncertainty on confidence estimation

Confidence is typically correlated with perceptual sensitivity, but these measures are dissociable. For example, confidence judgements are disproportionately affected by the variability of sensory signals. Here, in a preregistered study we investigate whether this signal variability effect on confidence can be attenuated with training. Participants completed five sessions where they viewed pairs of motion kinematograms and performed comparison judgements on global motion direction, followed by confidence ratings. In pre- and post-training sessions, the range of direction signals within each stimulus was manipulated across four levels. Participants were assigned to one of three training groups, differing as to whether signal range was varied or fixed during training, and whether or not trial-by-trial accuracy feedback was provided. The effect of signal range on confidence was reduced following training, but this result was invariant across the training groups, and did not translate to improved metacognitive insight. These results suggest that the influence of suboptimal heuristics on confidence can be mitigated through experience, but this shift in confidence estimation remains coarse, without improving insight into confidence estimation at the level of individual decisions.

Dyadic and triadic search: Benefits, costs, and predictors of group performance

In daily life, humans often perform visual tasks, such as solving puzzles or searching for a friend in a crowd. Performing these visual searches jointly with a partner can be beneficial: The two task partners can devise effective division-of-labor strategies and thereby outperform individuals who search alone. To date, it is unknown whether these group benefits scale up to triads or whether the cost of coordinating with others offsets any potential benefit for group sizes above two. To address this question, we compared participants’ performance in a visual search task that they performed either alone, in dyads, or in triads. When the search task was performed jointly, co-actors received information about each other’s gaze location. After controlling for speed–accuracy trade-offs, we found that triads searched faster than dyads, suggesting that group benefits do scale up to triads. Moreover, we found that triads divided the search space in accordance with the co-actors’ individual search performances but searched less efficiently than dyads. We also present a linear model to predict group benefits, which accounts for 70% of the variance. The model includes our experimental factors and a set of non-redundant predictors quantifying the similarities in the individual performances, the collaboration between co-actors, and the estimated benefits that co-actors would attain without collaborating. Overall, the present study demonstrates that group benefits scale up to larger group sizes, but the additional gains are attenuated by the increased costs associated with devising effective division-of-labor strategies.
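
The abstract describes its linear model only at this level of detail, so here is a sketch of the general approach (all predictor names and numbers below are hypothetical stand-ins, not the authors’ data or variables):

import numpy as np

rng = np.random.default_rng(2)
n = 40  # hypothetical groups
# Stand-in predictors: group size, similarity of individual performances,
# a collaboration index, and the estimated benefit without collaborating.
X = np.column_stack([
    rng.choice([2, 3], n),
    rng.random(n),
    rng.random(n),
    0.2 * rng.random(n),
])
y = (0.05 + 0.02 * X[:, 0] + 0.10 * X[:, 1] - 0.05 * X[:, 2]
     + 0.80 * X[:, 3] + rng.normal(0, 0.02, n))  # simulated group benefit

A = np.column_stack([np.ones(n), X])          # add intercept column
beta, *_ = np.linalg.lstsq(A, y, rcond=None)  # ordinary least squares
resid = y - A @ beta
r2 = 1 - resid @ resid / np.sum((y - y.mean()) ** 2)
print(np.round(beta, 3), round(r2, 2))        # the article reports R² ≈ .70 for its own model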

Flexible target templates improve visual search accuracy for faces depicting emotion

Theories of visual attention hypothesize that target selection depends upon matching visual inputs to a memory representation of the target – i.e., the target or attentional template. Most theories assume that the template contains a veridical copy of target features, but recent studies suggest that target representations may shift “off veridical” from actual target features to increase target-to-distractor distinctiveness. However, these studies have been limited to simple visual features (e.g., orientation, color), which leaves open the question of whether similar principles apply to complex stimuli, such as a face depicting an emotion, the perception of which is known to be shaped by conceptual knowledge. In three studies, we find confirmatory evidence for the hypothesis that attention modulates the representation of an emotional face to increase target-to-distractor distinctiveness. This occurs over and above strong pre-existing conceptual and perceptual biases in the representation of individual faces. The results are consistent with the view that visual search accuracy is determined by the representational distance between the target template in memory and distractor information in the environment, not by the veridical target and distractor features.

Learned prioritization yields attentional biases through selection history

While numerous studies have provided evidence for selection history as a robust influence on attentional allocation, it is unclear precisely which behavioral factors can result in this form of attentional bias. In the current study, we focus on “learned prioritization” as an underlying mechanism of selection history and its effects on selective attention. We conducted two experiments, each starting with a training phase to ensure that participants learned different stimulus priorities. This was accomplished via a visual search task in which a specific color was consistently more relevant when presented together with another given color. In Experiment 1, one color was always prioritized over another color and inferior to a third color, such that each color had an equal overall priority by the end of the training session. In Experiment 2, the three different colors had unequal priorities at the end of the training session. A subsequent testing phase in which participants had to search for a shape-defined target showed that only stimuli with unequal overall priorities (Experiment 2) affected attentional selection, with increased reaction times when a distractor was presented in a previously high-priority compared with a low-priority color. These results demonstrate that adopting an attentional set where certain stimuli are prioritized over others can result in a lingering attentional bias and further suggest that selection history does not equally operate on all previously selected stimuli. Finally, we propose that findings in value-driven attention studies where high-value and low-value signaling stimuli differentially capture attention may be a result of learned prioritization rather than reward.

Finding an interaction between Stroop congruency and flanker congruency requires a large congruency effect: A within-trial combination of conflict tasks

Responding to a conflict is assumed to trigger attentional-control processes—that is, processes that enable us to activate goal-relevant information and to inhibit irrelevant information. Typically, conflict is induced in tasks, such as the Stroop task (which requires identifying the color of color words) or the flanker task (which requires identifying a central character among flankers). Combining the conflicts within the same trial has been found to result in an interaction in reaction times (RTs), suggesting a generalization of attentional control. However, this interaction was observed when the congruency effect was substantial—that is, when the RT difference between incongruent trials (e.g., the word “green” printed in red for the Stroop task) and congruent trials (e.g., the word “red” printed in red) was large. The purpose of the present study was to investigate whether a large congruency effect is the necessary condition for observing the interaction. To this end, Stroop and flanker tasks were combined, and participants were asked to respond to the color of the central letter/word while ignoring the flanking letters/words. The magnitude of the congruency effect was increased: (a) by testing older adults (Experiment 1), (b) by manipulating the proportion of trials in which participants were asked to respond to the word meaning (Experiment 2), and (c) by using vocal responses (Experiment 3). The results showed an interaction when the Stroop congruency effect was large. Therefore, such interactions can be used to validate or invalidate theoretical explanations only when the precondition—a large congruency effect—is fulfilled.

Delayed disengagement from irrelevant fixation items: Is it generally functional?

In a circular visual search paradigm, the disengagement of attention is automatically delayed when a fixated but irrelevant center item shares features with the target item. Additionally, if mismatching letters are presented on these items, response times (RTs) are slowed further, while matching letters evoke faster responses (Wright, Boot, & Brockmole, 2015a). This has been interpreted as evidence that the delayed disengagement effect is functional, reflecting deeper processing of the fixation item. The purpose of the present study was to generalize these findings to unfamiliar symbols and to linear instead of circular layouts. Experiments 1 and 2 replicated the functional delayed disengagement effect with letters and symbols. In Experiment 3, the search layout was changed from circular to linear, and only saccades from left to right had to be performed. We did not find supportive data for the proposed functional nature of the effect. In Experiments 4 and 5, we tested whether the unidirectional saccade decision, potential blurring by adjacent items, or a lack of statistical power was the cause of the diminished effects in Experiment 3. With increased sample sizes, the delayed disengagement effect as well as its functional underpinning were observed consistently. Taken together, our results support prior assumptions that delayed disengagement effects are functionally rooted in deeper processing of the fixation items. They also generalize to unfamiliar symbols and linear display layouts.

Visual and central attention share a capacity limitation when the demands for serial item selection in visual search are high

Visual and central attention are limited in capacity. In conjunction search, visual attention is required to select the items and to bind their features (e.g., color, form, size), which results in a serial search process. In dual-tasks, central attention is required for response selection, but because central attention is limited in capacity, response selection can only be carried out for one task at a time. Here, we investigated whether visual and central attention rely on a common or on distinct capacity limitations. In two dual-task experiments, participants completed an auditory two-choice discrimination Task 1 and a conjunction search Task 2 that were presented with an experimentally modulated temporal interval between them (stimulus onset asynchrony [SOA]). In Experiment 1, Task 2 was a triple conjunction search task. Each item consisted of a conjunction of three features, so that target and distractors shared two features. In Experiment 2, Task 2 was a plus conjunction search task, in which target and distractors shared the same four features. The hypotheses for conjunction search time were derived from the locus-of-slack method. While plus conjunction search was performed after response selection in Task 1, a small part of triple conjunction search was still performed in parallel to response selection in Task 1. However, the between-experiment comparison was not significant, indicating that both search tasks may require central attention. Taken together, the present study provides evidence that visual and central attention share a common capacity limitation when conjunction search relies strongly on serial item selection.

When processing costs impact predictive processing: The case of foreign-accented speech and accent experience

Listeners use linguistic information and real-world knowledge to predict upcoming spoken words. However, studies of predictive processing have focused on prediction under optimal listening conditions. We examined the effect of foreign-accented speech on predictive processing. Furthermore, we investigated whether accent-specific experience facilitates predictive processing. Using the visual world paradigm, we demonstrated that although the presence of an accent impedes predictive processing, it does not preclude it. We further showed that as listener experience increases, predictive processing for accented speech increases and begins to approximate the pattern seen for native speech. These results speak to the limitation of the processing resources that must be allocated, leading to a trade-off when listeners are faced with increased uncertainty and more effortful recognition due to a foreign accent.

Desirable and undesirable difficulties: Influences of variability, training schedule, and aptitude on nonnative phonetic learning

Adult listeners often struggle to learn to distinguish speech sounds not present in their native language. High-variability training sets (i.e., stimuli produced by multiple talkers or stimuli that occur in diverse phonological contexts) often result in better retention of the learned information, as well as increased generalization to new instances. However, high-variability training is also more challenging, and not every listener can take advantage of this kind of training. An open question is how variability should be introduced to the learner in order to capitalize on the benefits of such training without derailing the training process. The current study manipulated phonological variability as native English speakers learned a difficult nonnative (Hindi) contrast by presenting the nonnative contrast in the context of two different vowels (/i/ and /u/). In a between-subjects design, variability was manipulated during training and during test. Participants were trained in the evening hours and returned the next morning for reassessment to test for retention of the speech sounds. We found that blocked training was superior to interleaved training for both learning and retention, but for learners in the interleaved training group, higher pretraining aptitude predicted better identification performance. Further, pretraining discrimination aptitude positively predicted changes in phonetic discrimination after a period of off-line consolidation, regardless of the training manipulation. These findings add to a growing literature suggesting that variability may come at a cost in phonetic learning and that aptitude can affect both learning and retention of nonnative speech sounds.

Demystifying visual awareness: Peripheral encoding plus limited decision complexity resolve the paradox of rich visual experience and curious perceptual failures

Human beings subjectively experience a rich visual percept. However, when behavioral experiments probe the details of that percept, observers perform poorly, suggesting that vision is impoverished. What can explain this awareness puzzle? Is the rich percept a mere illusion? How does vision work as well as it does? This paper argues for two important pieces of the solution. First, peripheral vision encodes its inputs using a scheme that preserves a great deal of useful information, while losing the information necessary to perform certain tasks. The tasks rendered difficult by the peripheral encoding include many of those used to probe the details of visual experience. Second, many tasks used to probe attentional and working memory limits are, arguably, inherently difficult, and poor performance on these tasks may indicate limits on decision complexity. Two assumptions are critical to making sense of this hypothesis: (1) All visual perception, conscious or not, results from performing some visual task; and (2) all visual tasks face the same limit on decision complexity. Together, peripheral encoding plus decision complexity can explain a wide variety of phenomena, including vision’s marvelous successes, its quirky failures, and our rich subjective impression of the visual world.

Interleaved lexical and audiovisual information can retune phoneme boundaries

To adapt to situations in which speech perception is difficult, listeners can adjust boundaries between phoneme categories using perceptual learning. Such adjustments can draw on lexical information in surrounding speech, or on visual cues via speech-reading. In the present study, listeners proved able to flexibly adjust the boundary between two plosive/stop consonants, /p/-/t/, using both lexical and speech-reading information within the same experimental design for both cue types. Videos of a speaker pronouncing pseudo-words and audio recordings of Dutch words were presented in alternating blocks of either stimulus type. Listeners were able to switch between cues to adjust phoneme boundaries, and the resulting effects were comparable to those from listeners receiving only a single source of information. Overall, audiovisual cues (i.e., the videos) produced the stronger effects, commensurate with their applicability for adapting to noisy environments. Lexical cues were able to induce effects with fewer exposure stimuli and a changing phoneme bias, in a design unlike most prior studies of lexical retuning. While lexical retuning effects were relatively weak compared to audiovisual recalibration, this discrepancy could reflect how lexical retuning may be more suitable for adapting to speakers than to environments. Nonetheless, the presence of the lexical retuning effects suggests that lexical retuning may be invoked at a faster rate than previously seen. In general, this technique has further illuminated the robustness of adaptability in speech perception, and offers the potential to enable further comparisons across differing forms of perceptual learning.

Assessing introspective awareness of attention capture

Visual attention can sometimes be involuntarily captured by salient stimuli, and this may lead to impaired performance in a variety of real-world tasks. If observers were aware that their attention was being captured, they might be able to exert control and avoid subsequent distraction. However, it is unknown whether observers can detect attention capture when it occurs. In the current study, participants searched for a target shape and attempted to ignore a salient color distractor. On a subset of trials, participants then immediately classified whether the salient distractor captured their attention (“capture” vs. “no capture”). Participants were slower and less accurate at detecting the target on trials on which they reported “capture” than “no capture.” Follow-up experiments revealed that participants specifically detected covert shifts of attention to the salient item. Altogether, these results indicate that observers can have immediate awareness of visual distraction, at least under certain circumstances.

Forty years after feature integration theory: An introduction to the special issue in honor of the contributions of Anne Treisman

The simultaneous oddball: Oddball presentation does not affect simultaneity judgments

The oddball duration illusion describes how a rare or nonrepeated stimulus is perceived as lasting longer than a common or repeated stimulus. It has been argued that the oddball duration illusion could emerge because of an earlier perceived onset of an oddball stimulus. However, most methods used to assess the perceived duration of an oddball stimulus are ill suited to detect onset effects. Therefore, in the current article, I tested the perceived onset of oddball and standard stimuli using a simultaneity judgment task. In Experiments 1 and 2, repetition and rarity of the target stimulus were varied, and participants were required to judge whether the target stimulus and another stimulus were simultaneous. In Experiment 3, I tested whether a brief initial stimulus could act as a conditioning stimulus in the oddball duration illusion; this was to ensure that an oddball duration illusion could have occurred given the short duration of stimuli in the first two experiments. In both of the first two experiments, I found moderate support for no onset-based difference between oddball and nonoddball stimuli. In Experiment 3, I found that a short conditioning stimulus could still lead to the oddball duration illusion, removing this possible explanation for the null result. Experiment 4 showed that an oddball duration illusion could emerge given the rarity of the stimulus and a concurrent sound. In sum, the current article found evidence against an onset-based explanation of the oddball duration illusion.

On the link between attentional search and the oculomotor system: Is preattentive search restricted to the range of eye movements?

It has been proposed that covert visual search can be fast, efficient, and stimulus driven, particularly when the target is defined by a salient single feature, or slow, inefficient, and effortful when the target is defined by a nonsalient conjunction of features. This distinction between fast, stimulus-driven orienting and slow, effortful orienting can be related to the distinction between exogenous spatial attention and endogenous spatial attention. Several studies have shown that exogenous, covert orienting is limited to the range of saccadic eye movements, whereas covert endogenous orienting is independent of that range. The current study examined whether covert visual search is affected in a similar way. Experiment 1 showed that covert visual search for feature singletons was impaired when stimuli were presented beyond the range of saccadic eye movements, whereas conjunction search was unaffected by array position. Experiment 2 replicated and extended this finding by measuring search times at six eccentricities. The impairment in covert feature search emerged only when stimuli crossed the effective oculomotor range and remained stable for locations farther into the periphery, ruling out the possibility that the results of Experiment 1 were due to a failure to fully compensate for the effects of cortical magnification. The findings are interpreted in terms of biased-competition and oculomotor theories of spatial attention. It is concluded that, as with covert exogenous orienting, biological constraints on overt orienting in the oculomotor system constrain covert, preattentive search.

Studying the dynamics of visual search behavior using RT hazard and micro-level speed–accuracy tradeoff functions: A role for recurrent object recognition and cognitive control processes

Thanks to the work of Anne Treisman and many others, the visual search paradigm has become one of the most popular paradigms in the study of visual attention. However, statistics such as mean correct response time (RT) and percent error do not usually suffice to decide between the different search models that have been developed. Recently, to move beyond mean performance measures in visual search, RT histograms have been plotted, theoretical waiting-time distributions have been fitted, and whole RT and error distributions have been simulated. Here we promote and illustrate the general application of discrete-time hazard analysis to response times, and of micro-level speed–accuracy tradeoff analysis to timed response accuracies. An exploratory analysis of published benchmark search data from feature, conjunction, and spatial configuration search tasks reveals new features of visual search behavior, such as a relatively flat hazard function in the right tail of the RT distributions for all tasks, a clear effect of set size on the shape of the RT distribution for the feature search task, and individual differences in the presence of a systematic pattern of early errors. Our findings suggest that the temporal dynamics of visual search behavior result from a decision process that is temporally modulated by concurrently active recurrent object recognition, learning, and cognitive control processes, alongside attentional selection processes.
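
As a concrete illustration of the core method, the following is a minimal sketch of how a discrete-time RT hazard function might be estimated from raw response times; the bin width, time range, and simulated data are illustrative assumptions, not the authors' code.

```python
import numpy as np

def discrete_time_hazard(rts, bin_width=50, t_max=2000):
    """Estimate the discrete-time hazard h(t): the probability that a
    response occurs in bin t, given no response before bin t."""
    rts = np.asarray(rts)
    edges = np.arange(0, t_max + bin_width, bin_width)
    counts, _ = np.histogram(rts, bins=edges)
    # Number of trials still "at risk" (no response yet) at each bin start.
    at_risk = len(rts) - np.concatenate(([0], np.cumsum(counts)[:-1]))
    with np.errstate(divide="ignore", invalid="ignore"):
        hazard = np.where(at_risk > 0, counts / at_risk, np.nan)
    return edges[:-1], hazard

# Simulated RTs (ms) with an exponential tail; such a tail yields the
# kind of flat right-tail hazard described in the abstract.
rng = np.random.default_rng(1)
rts = 300 + rng.exponential(scale=250, size=5000)
bin_starts, hazard = discrete_time_hazard(rts)
```

A micro-level speed–accuracy tradeoff function would be computed analogously, by conditioning response accuracy on the RT bin rather than counting responses.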

Ratio effect slope can sometimes be an appropriate metric of the approximate number system sensitivity

The approximate number system (ANS) is believed to be an essential component of numerical understanding, and its sensitivity has been found to correlate with various mathematical abilities. Recently, Chesney (2018, Attention, Perception, & Psychophysics, 80[5], 1057–1063) demonstrated that if ANS sensitivity is measured with the ratio effect slope, the slope may measure that sensitivity imprecisely. The present work extends her findings by demonstrating mathematically that the usability of the ratio effect slope depends on the Weber fraction range of the sample and on the ratios of the numbers in the test used. Several indices presented here can specify whether using the ratio effect slope as a replacement for the sigmoid fit is recommended. Detailed recommendations and a publicly available script help researchers plan or evaluate the use of the ratio effect slope as an ANS sensitivity index.
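
To make the contrast concrete, here is a minimal sketch that assumes the standard linear ANS model with scalar variability; the ratios, Weber fraction, and helper names are illustrative, not taken from the article.

```python
import numpy as np
from scipy.stats import norm, linregress
from scipy.optimize import curve_fit

def p_correct(ratio, w):
    """Standard ANS model: probability of correctly judging which of two
    numerosities is larger (ratio = larger/smaller), given Weber fraction w."""
    return norm.cdf((ratio - 1) / (w * np.sqrt(ratio**2 + 1)))

ratios = np.array([1.1, 1.2, 1.3, 1.5, 2.0])
accuracy = p_correct(ratios, w=0.25)  # noiseless model predictions

# Ratio effect slope: a linear summary of how accuracy rises with ratio.
slope = linregress(ratios, accuracy).slope

# Sigmoid fit: recovers the Weber fraction itself rather than a linear proxy.
w_hat, _ = curve_fit(p_correct, ratios, accuracy, p0=[0.3])
```

Because the slope is a nonlinear function of the Weber fraction and of the tested ratios, it tracks sensitivity faithfully only over certain ranges, which is exactly the dependence the article quantifies.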

Response interference by central foils is modulated by dimensions of depression and anxiety

We used a maximum-likelihood-based model selection approach to investigate which aspects of affective traits influence flanker interference in a nonaffective task. A total of 153 undergraduates completed measures of anhedonic depression, anxious arousal, and anxious apprehension, and a modified flanker task with two levels of perceptual load. For central foils, the most parsimonious model included load, depression, and anxious arousal. Participants scoring low on the depression and anxious arousal scales exhibited a typical perceptual load effect, with larger interference effects under low than under high perceptual load. Increased depression symptoms were associated with a reduced perceptual load effect. However, the load effect reemerged in individuals who scored high on both the depression and anxious arousal scales, though to a lesser extent than in those scoring low on both. This pattern of results underscores the importance of studying co-occurring affective traits and their interactions in the same sample. For peripherally presented foils, the model that included only load as a factor was more parsimonious than any of the models incorporating affective traits. These findings suggest avenues for future research and highlight the influence of diverse affective symptoms on various aspects of nonemotional attentional processing.

Object-based attention generalizes to multisurface objects

When a part of an object is cued, targets presented in other locations on the same object are detected more rapidly and accurately than are targets on other objects. Often in object-based attention experiments, cues and targets appear not only on the same object but also on the same surface. In four psychophysical experiments, we examined whether the “object” of attentional selection was the entire object or one of its surfaces. In Experiment 1, facilitation effects were found for targets on uncued, adjacent surfaces on the same object, even when the cued and uncued surfaces were oriented differently in depth. This suggests that the “object-based” benefits of attention are not restricted to individual surfaces. Experiments 2a and 2b examined the interaction of perceptual grouping and object-based attention. In both experiments, cuing benefits extended across objects when the surfaces of those objects could be grouped, but the effects were not as strong as in Experiment 1, where the surfaces belonged to the same object. The cuing effect was strengthened in Experiment 3 by connecting the cued and target surfaces with an intermediate surface, making them appear to all belong to the same object. Together, the experiments suggest that the objects of attention do not necessarily map onto discrete physical objects defined by bounded surfaces. Instead, attentional selection can be allocated to perceptual groups of surfaces and objects in the same way as it can to a location or to groups of features that define a single object.

Tactile distance anisotropy on the palm: A meta-analysis

Illusions of the perceived distance between two touches on the skin have been studied since the classic work of Weber in the 19th century. For example, anisotropies of perceived tactile distance have been consistently found on several body parts, including the hand dorsum, the forearm, and the face. In each case, tactile distances that are oriented across body width are perceived as being larger than those oriented along body length. Several studies have investigated tactile distance anisotropy on the glabrous skin of the palm of the hand, but they have reached inconsistent conclusions—with some studies finding no anisotropy, and others finding an anisotropy analogous to that found on the dorsum. Given these inconsistencies, the aim of this study was to conduct a systematic meta-analysis of the existing data regarding anisotropy on the palm. A total of ten experiments were identified, which overall provided strong evidence for an anisotropy on the palm (Hedges’s g = 0.521), with distances aligned with hand width being perceived as approximately 10% bigger than distances aligned with hand length. While this anisotropy is analogous to that found on the hand dorsum, it is substantially smaller in magnitude, and the two biases appear to be uncorrelated. The present results show that, despite inconsistent results across studies, the existing data do indicate an anisotropy of tactile distance on the palm of the hand.
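
For readers who want the effect-size convention spelled out, here is a minimal sketch of Hedges's g for two independent groups; the numbers are invented for illustration, and paired designs (typical of tactile-distance experiments) would instead standardize the mean difference by the standard deviation of the difference scores.

```python
import numpy as np

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Hedges's g: Cohen's d with the small-sample bias correction J."""
    sd_pooled = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2)
                        / (n1 + n2 - 2))
    d = (m1 - m2) / sd_pooled
    J = 1 - 3 / (4 * (n1 + n2) - 9)  # Hedges's correction factor
    return J * d

# Illustrative values: judged distances (mm) across vs. along the palm.
g = hedges_g(m1=32.0, sd1=6.0, n1=20, m2=29.0, sd2=6.5, n2=20)
```

Study-level g values are then pooled across experiments, typically with inverse-variance weights in a random-effects model.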

Generalization of dimension-based statistical learning

Recent research demonstrates that the relationship between an acoustic dimension and speech categories is not static. Rather, it is influenced by the evolving distribution of dimensional regularity experienced across time, and it is specific to the individual sounds experienced. Three studies examine the nature of this perceptual, dimension-based statistical learning of artificially accented [b] and [p] speech categories in online word recognition, by testing generalization of learning across contexts and testing the effect of a larger word list across which learning is induced. The results indicate that whereas learning of accented [b] and [p] generalizes across contexts, generalization to contexts not experienced in the accent is weaker, even for the same speech categories [b] and [p] spoken by the same speaker. The results support a rich model of speech representation that is sensitive to context-dependent variation in the way acoustic dimensions relate to speech categories.

Certain non-isochronous sound trains are perceived as more isochronous when they start on beat

Perceiving the duration of neighboring time intervals is vital for rhythm perception. We discovered a phenomenon in which the perceived equality/inequality of neighboring time intervals in a sound sequence is changed by its metrical interpretation. The target sound sequence consisted of eight short sound bursts marking seven neighboring time intervals, which alternated between two durations (T1 and T2) in the pattern T1-T2-T1-T2 .... There were three tempos, corresponding to T1 + T2 values of 210, 420, and 630 ms. The physical difference between T1 and T2 (T1 − T2) was varied systematically for each tempo, in the range of −100 to 100 ms (when T1 + T2 was 210 or 420 ms) or −150 to 150 ms (when T1 + T2 was 630 ms). Participants reported the level of perceived equality/inequality of these neighboring time intervals. For each target sequence, four isochronous lower-pitched preceding sounds were added at different phases, so that the beginning of either T1 (Beat-on-T1 condition) or T2 (Beat-on-T2 condition) coincided with the beat induced by these preceding sounds. When T2 was longer than T1 by up to 60 ms, the neighboring time intervals of the same target sequence were perceived as more “equal” in the Beat-on-T1 condition than in the Beat-on-T2 condition. This difference in perceived equality/inequality was significant only at the intermediate tempo of T1 + T2 = 420 ms. The restriction of the effect to these temporal conditions can be accounted for by a time-perception illusion called time-shrinking.
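
To make the stimulus construction concrete, the sketch below generates the onset schedule under assumptions consistent with the description: burst onsets mark the interval boundaries, and the four preceding beats are spaced at the T1 + T2 period and phase-aligned with either a T1 onset or a T2 onset. The exact phases are an assumption, not taken from the article.

```python
import numpy as np

def target_onsets(t1, t2, n_intervals=7):
    """Onsets (ms) of the eight bursts marking seven alternating
    T1-T2-T1-... intervals, with the first burst at t = 0."""
    durations = [t1 if i % 2 == 0 else t2 for i in range(n_intervals)]
    return np.concatenate(([0.0], np.cumsum(durations)))

def preceding_beats(t1, t2, phase=0.0, n_beats=4):
    """Four isochronous preceding beats at period T1 + T2, phased so the
    induced beat lands `phase` ms into the target sequence:
    phase = 0 for Beat-on-T1, phase = t1 for Beat-on-T2 (assumed)."""
    period = t1 + t2
    return phase + np.arange(-n_beats, 0) * period

# Intermediate tempo: T1 + T2 = 420 ms, with T2 longer than T1 by 60 ms.
onsets = target_onsets(t1=180, t2=240)
beats_on_t1 = preceding_beats(180, 240, phase=0.0)
beats_on_t2 = preceding_beats(180, 240, phase=180.0)
```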

Context affects implicit learning of spatial bias depending on task relevance

Recent studies on the probability cueing effect have shown that a spatial bias emerges toward a location where a target frequently appears. In the present study, we explored whether such spatial bias can be flexibly shifted when the target-frequent location changes depending on the given context. In four consecutive experiments, participants performed a visual search task within two distinct contexts that predicted the visual quadrant that was more likely to contain a target. We found that spatial attention was equally biased toward two target-frequent quadrants, regardless of context (context-independent spatial bias), when the context information was not mandatory for accurate visual search. Conversely, when the context became critical for the visual search task, the spatial bias shifted significantly more to the target-frequent quadrant predicted by the given context (context-specific spatial bias). These results show that the task relevance of context determines whether probabilistic knowledge can be learned flexibly in a context-specific manner.

Rapid and coarse face detection: With a lack of evidence for a nasal-temporal asymmetry

Humans have structures dedicated to the processing of faces, including cortical components (e.g., areas in the occipital and temporal lobes) and subcortical components (e.g., the superior colliculus and amygdala). Although faces are processed more quickly than stimuli from other categories, there is a lack of consensus regarding whether subcortical structures are responsible for this rapid face processing. To probe this, we exploited the asymmetry in the strength of projections to subcortical structures between the nasal and temporal hemiretinae. Participants distinguished faces from unrecognizable control stimuli and performed the same task with houses. In Experiments 1 and 3, at the fastest reaction times, participants detected faces more accurately than houses; however, there was no benefit of presenting preferentially to the subcortical pathway. In Experiment 2, we probed the coarseness of the rapid pathway by making the foil stimuli more similar to the faces and houses. This eliminated the rapid detection advantage, suggesting that rapid face processing is limited to coarse representations. In Experiment 4, we sought to determine whether the natural difference between the spatial frequencies of faces and houses was driving the effects seen in Experiments 1 and 3; we spatially filtered the faces and houses so that they were matched. Better rapid detection was again found for faces relative to houses, but there was no benefit of presenting preferentially to the subcortical pathway. Taken together, the results suggest a coarse rapid face-detection mechanism that does not depend on spatial frequency and shows no advantage for preferential presentation to subcortical structures.
