Attention, Perception, & Psychophysics - AP&P

Subject: Psychology
Publisher: Springer Customer Service Center GmbH
ISSN: 1943-3921
Language: English

Title Information

Attention, Perception, & Psychophysics is an official journal of the Psychonomic Society. It spans all areas of research in sensory processes, perception, attention, and psychophysics. Most articles published are reports of experimental work; the journal also presents theoretical, integrative, and evaluative reviews.

Visual noise consisting of X-junctions has only a minimal adverse effect on object recognition

In 1968, Guzman showed that the myriad of surfaces composing a highly complex and novel assemblage of volumes can readily be assigned to their appropriate volumes in terms of the constraints offered by the vertices of coterminating edges. Of particular importance was the L-vertex, produced by the cotermination of two contours, which provides strong evidence for the termination of a 2-D surface. An X-junction, formed by the crossing of two contours without a change of direction at the crossing, played no role in the segmentation of a scene. If the potency of noise elements to affect recognition performance reflects their relevance to the segmentation of scenes, as was suggested by Guzman, gaps in an object’s contours bounded by irrelevant X-junctions would be expected to have little or no adverse effect on shape-based object recognition, whereas gaps bounded by L-vertices would be expected to have a strong deleterious effect when they disrupt the smooth continuation of contours. Guzman’s roles for the various vertices and junctions have never been put to systematic test with respect to human object recognition. By adding identical noise contours, producing either L-vertices or X-junctions, to line drawings of objects, these shape features could be compared with respect to their disruption of object recognition. Guzman’s insights that irrelevant L-vertices should be highly disruptive and that irrelevant X-junctions should have only a minimal deleterious effect were confirmed.

Interacting hands draw attention during scene observation

In this study I examined the role of the hands in scene perception. In Experiment 1, eye movements during free observation of natural scenes were analyzed. Fixations to faces and hands were compared under several conditions, including scenes with and without faces, with and without hands, and without a person. The hands were either resting (e.g., lying on the knees) or interacting with objects (e.g., holding a bottle). Faces held an absolute attentional advantage, regardless of hand presence. Importantly, fixations to interacting hands were faster and more frequent than those to resting hands, suggesting attentional priority for interacting hands. The interacting-hand advantage could not be attributed to perceptual saliency or to the hand owner’s (i.e., the depicted person’s) gaze being directed at the interacting hand. Experiment 2 confirmed the interacting-hand advantage in a visual search paradigm with more controlled stimuli. The present results indicate that the key to understanding the role of attention in person perception is the competitive interaction among objects such as faces, hands, and the objects with which the person interacts.

The space contraction asymmetry in Michotte’s launching effect

Previous studies have found that, compared with noncausal events, perceived causality produces spatial contraction between the causal object and the effect object. The present research examined whether the causal object and the effect object contribute equally to this spatial contraction. A modified launching effect, in which a bar bridges the spatial gap between the final position of the launcher and the initial position of the target, was adopted. Experiment 1 validates the absolute underestimation of the bar’s length between the launcher and the target. Experiment 2a finds that in the direct launching effect, the perceived position of the bar’s trailing edge, which was contacted by the launcher’s final position, was displaced along the objects’ direction of movement, whereas the perceived position of the bar’s leading edge, which was contacted by the target’s initial position, was displaced in the direction opposite to the movement. The magnitude of the former displacement was significantly larger than that of the latter, revealing a significant contraction asymmetry. Experiment 2b demonstrates that the contraction asymmetry did not result from the launcher remaining in contact with the edge of the bar. Experiment 3 indicates that the contraction asymmetry is a type of postdictive effect; that is, to some extent, the asymmetry depends on what happens after contact. In conclusion, the space between the causal object and the effect object contracts asymmetrically in the launching effect, which implies that the causal object and the effect object are perceived as shifting toward each other nonequidistantly in visual space.

Dwelling on simple stimuli in visual search

Research and theories on visual search often focus on visual guidance to explain differences in search. Guidance is the tuning of attention to target features; it facilitates search because distractors that do not show target features can be more effectively ignored (skipping). As a general rule, the better the guidance, the more efficient the search. Correspondingly, behavioral experiments have often interpreted differences in efficiency as reflecting varying degrees of attentional guidance. But other factors, such as the time spent processing a distractor (dwelling) or multiple visits to the same stimulus in a search display (revisiting), are also involved in determining search efficiency. While there is some research showing that dwelling and revisiting modulate search times in addition to skipping, the corresponding studies used complex naturalistic and category-defined stimuli. The present study tests whether results from prior research generalize to simpler stimuli, for which target-distractor similarity, a strong factor influencing search performance, can be manipulated in a detailed fashion. Thus, in the present study, simple stimuli with varying degrees of target-distractor similarity were used to deliver conclusive evidence for the contribution of dwelling and revisiting to search performance. The results have theoretical and methodological implications: They imply that visual search models should not treat dwelling and revisiting as constants across varying levels of search efficiency, and that behavioral search experiments are equivocal with respect to the processing mechanisms underlying more versus less efficient search. We also suggest that eye-tracking methods may be used to disentangle different search components such as skipping, dwelling, and revisiting.
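
Studies of this kind typically separate the three components from eye-tracking records. As a rough illustration only, here is a minimal Python sketch of such a decomposition, assuming a hypothetical per-trial record of fixated item IDs and fixation durations (all names and the data layout are assumptions, not the study’s code):

```python
from collections import Counter

def search_components(fixated_items, fixation_durations, display_items):
    """Decompose one search trial into skipping, dwelling, and revisiting.
    fixated_items: item IDs in order of fixation; fixation_durations: ms.
    Hypothetical data layout, for illustration only."""
    visits = Counter(fixated_items)
    skipped = [item for item in display_items if visits[item] == 0]
    dwell_ms = Counter()
    for item, duration in zip(fixated_items, fixation_durations):
        dwell_ms[item] += duration                 # total time spent on each item
    revisits = sum(max(0, n - 1) for n in visits.values())
    visited = len(display_items) - len(skipped)
    return {
        "skip_rate": len(skipped) / len(display_items),
        "mean_dwell_ms": sum(dwell_ms.values()) / max(visited, 1),
        "revisit_count": revisits,
    }

# Example: an 8-item display in which item 3 is fixated twice (one revisit)
print(search_components([1, 3, 5, 3], [180, 220, 250, 300], list(range(8))))
```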

Lost to translation: How design factors of the mouse-tracking procedure impact the inference from action to cognition

From an embodiment perspective, action and cognition influence each other constantly. This interaction has been utilized in mouse-tracking studies to infer cognitive states from movements, assuming a continuous manifestation of cognitive processing into movement. However, it is mostly unknown how this manifestation is affected by the variety of possible design choices in mouse-tracking paradigms. Here we studied how three design factors impact the manifestation of cognition into movement in a Simon task with mouse tracking. We varied the response selection (i.e., with or without clicking), the ratio between hand and mouse cursor movement, and the location of the response boxes. The results show that all design factors can blur or even prevent the manifestation of cognition into movement, as reflected by a reduction in movement consistency and action dynamics, as well as by the adoption of unsuitable movement strategies. We conclude that deliberate and careful design choices in mouse-tracking experiments are crucial to ensuring a continuous manifestation of cognition in movement. We discuss the importance of developing a standard practice in the design of mouse-tracking experiments.

Comparable search efficiency for human and animal targets in the context of natural scenes

In a previous series of studies, we have shown that search for human targets in the context of natural scenes is more efficient than search for mechanical targets. Here we asked whether this search advantage extends to other categories of biological objects. We used videos of natural scenes to directly contrast search efficiency for animal and human targets among biological or nonbiological distractors. In visual search arrays consisting of two, four, six, or eight videos, observers searched for animal targets among machine distractors, and vice versa (Exp. 1). Another group searched for animal targets among human distractors, and vice versa (Exp. 2). We measured search slope as a proxy for search efficiency, and complemented the slope with eye movement measurements (fixation duration on the target, as well as the proportion of first fixations landing on the target). In both experiments, we observed no differences in search slopes or proportions of first fixations between any of the target–distractor category pairs. With respect to fixation durations, we found shorter on-target fixations only for animal targets as compared to machine targets (Exp. 1). In summary, we did not find that the search advantage for human targets over mechanical targets extends to other biological objects. We also found no search advantage for detecting humans as compared to other biological objects. Overall, our pattern of findings suggests that search efficiency in natural scenes, as elsewhere, depends crucially on the specific target–distractor categories.

Is it impossible to acquire absolute pitch in adulthood?

Absolute pitch (AP) refers to the rare ability to name the pitch of a tone without an external reference. It is widely believed to be reserved for the select few with a rare genetic makeup and early musical training during the critical period, making the acquisition of AP in adulthood impossible. Previous studies have not offered a strong test of the effect of training because of issues such as small sample sizes and insufficient training. In three experiments, adults learned to name pitches in a computerized, gamified, and personalized training protocol for 12 to 40 hours, with the number of pitches gradually increased from three to twelve. Across the three experiments, the training covered different octaves, timbres, and training environments (inside or outside the laboratory). AP learning showed classic characteristics of perceptual learning, including generalization of learning dependent on the training stimuli and sustained improvement for at least one to three months. Fourteen percent of the participants (6 out of 43) were able to name twelve pitches at 90% or higher accuracy, comparable to that of ‘AP possessors’ as defined in the literature. Overall, AP continues to be learnable in adulthood, which challenges the view that AP development requires both a rare genetic predisposition and learning within the critical period. The finding calls for a reconsideration of the role of learning in the occurrence of AP, and pushes the field to pinpoint and explain the differences, if any, between the aspects of AP that are more trainable in adulthood and the aspects that are potentially exclusive to the few exceptional AP possessors observed in the real world.

Time for Action: An Introduction to the Special Issue

Probing early attention following negative and positive templates

In visual search tasks, cues indicating the upcoming distractor color can benefit search performance compared with uninformative cues. However, benefits from these negative cues are consistently smaller than benefits from positive cues (cuing the target color), even when both cues are equally informative. This suggests that using a negative template is less effective than using a positive template. Here, we contrast the early attentional effects of negative and positive templates using the letter probe technique. On most trials, participants searched for a shape-defined target after receiving a positive, negative, or neutral color cue. On occasional probe trials, letters briefly appeared on the search items, and participants reported as many letters as possible. Examining the proportion of letters reported on potential targets versus distractors provided a snapshot of attentional allocation at the time of the probe. Across probes at 100, 250, and 400 ms, participants recalled more letters on target-colored objects than on distractor-colored objects following both negative and positive cues. These cuing benefits on probe report trials were larger at later probe times than at early probe times, indicating that both types of cues became more effective over time. Importantly, negative cue probe benefits were consistently smaller than positive cue benefits. Finally, following an extremely short probe (25 ms), we found no RT benefit following negative cues and no evidence that negatively cued items capture attention. These results help explain the previously reported differences in RT benefits following positive and negative cues, and support the idea of early active attentional suppression.

Embodied gestalts: Unstable visual phenomena become stable when they are stimuli for competitive action selection

An animal’s environment is rich with affordances. Different possible actions are specified by visual information while competing for dominance over neural dynamics. Affordance competition models account for this in terms of winner-takes-all cross-inhibition dynamics. Multistable phenomena also reveal how the visual system deals with ambiguity. Their key property is spontaneous instability, in forms such as alternating dominance in binocular rivalry. Theoretical models of self-inhibition or self-organized instability posit that the instability is tied to some kind of neural adaptation and that its functional significance is to enable flexible perceptual transitions. We hypothesized that the two perspectives are interlinked: spontaneous instability is an intrinsic property of perceptual systems, but it is revealed when they are stripped of the constraints imposed by possibilities for action. To test this, we compared a multistable gestalt phenomenon against its embodied version and estimated the neural adaptation and competition parameters of an affordance transition dynamic model. Wertheimer’s (Zeitschrift für Psychologie, 61, 161–265, 1912) optimal (β) and pure (φ) forms of apparent motion from a stroboscopic point-light display were endowed with action relevance by embedding the display in a visual object-tracking task. Thus, each mode was complemented by its action, because each perceptual mode uniquely enabled different ways of tracking the target. Perceptual judgment of the traditional apparent motion exhibited spontaneous instabilities, in the form of earlier switching when the frame rate was changed stepwise. In contrast, the embodied version exhibited hysteresis, consistent with affordance transition studies. Consistent with our predictions, the parameter for competition between modes in the affordance transition model increased, and the parameter for self-inhibition vanished.

The structure of illusory conjunctions reveals hierarchical binding of multipart objects

The world around us is filled with complex objects, full of color, motion, shape, and texture, and these features seem to be represented separately in the early visual system. Anne Treisman pointed out that binding these separate features together into coherent conscious percepts is a serious challenge, and she argued that selective attention plays a critical role in this process. Treisman also showed that, consistent with this view, outside the focus of attention we suffer from illusory conjunctions: misperceived pairings of features into objects. Here we used Treisman’s logic to study the structure of pre-attentive representations of multipart, multicolor objects, by exploring the patterns of illusory conjunctions that arise outside the focus of attention. We found consistent evidence of some pre-attentive binding of colors to their parts, and weaker evidence of binding between the multiple colors of the same object. The extent to which such hierarchical binding occurs seems to depend on the geometric structure of multipart objects: Objects whose parts are easier to separate seem to exhibit greater pre-attentive binding. Together, these results suggest that representations outside the focus of attention are not entirely “shapeless bundles of features,” but preserve some meaningful object structure.

The number of letters in number words influences the response time in numerical comparison tasks: Evidence using Korean number words

Here, we report that the number of letters in number words influences response times in numerical comparison tasks. In this experiment, a pair of single Korean number words consisting of two or three letters was simultaneously presented within areas of the same size, and the participants reported which word was semantically larger. The conditions were categorized as congruent, neutral, and incongruent based on the congruency between the meaning indicated by the numeral (i.e., the size of the number, or semantic size) and the number of letters in each number word. In the analysis, response times were fastest under the congruent condition, intermediate under the neutral condition, and slowest under the incongruent condition. Thus, the congruency effect is explained by the number of letters rather than by continuous visual properties (occupied area and length). These results suggest that the semantic representation of number words is automatically influenced by the number of letters they contain.

The contribution of spatial position and rotated global configuration to contextual cueing

Spatial information can incidentally guide attention to the likely location of a target. This contextual cueing has been observed even when only the relative configuration, but not the individual locations, of distractor items was repeated, or vice versa (Jiang & Wagner in Perception & Psychophysics, 66(3), 454-463, 2004). The present study investigated the contribution of global configuration and individual spatial location to contextual cueing. Participants repeatedly searched 12 visual search displays in a learning session. In a subsequent transfer session, there were four conditions: fully repeated configurations (same as the displays in the learning session), recombined configurations from two learned configurations with the same target location (preserving distractor locations but not configuration), rotated configurations (preserving configuration but not distractor locations), and new configurations. We showed that contextual cueing occurred when only distractor locations or only the relative configuration, randomly intermixed, was preserved within a single experiment. Beyond replicating the results of Jiang and Wagner, this makes an adjustment to one particular type of transformation, which may have occurred when the transformations were tested in separate experiments, unlikely. Moreover, contextual cueing in rotated configurations showed that repeated configurations can serve as context cues even without preserved azimuth.

What first drives visual attention during the recognition of object-directed actions? The role of kinematics and goal information

The recognition of others’ object-directed actions is known to involve the decoding of both the visual kinematics of the action and the action goal. Yet whether action recognition is first guided by the processing of visual kinematics or by a prediction about the goal of the actor remains debated. To provide experimental evidence bearing on this issue, the present study investigated whether visual attention is preferentially captured by visual kinematics or by action-goal information when processing others’ actions. In a visual search task, participants were asked to find correct actions (e.g., drinking from a glass) among distractor actions. Distractor actions contained grip and/or goal violations and could therefore share the correct goal and/or the correct grip with the target. The time course of the fixation proportion on each distractor action was taken as an indicator of visual attention allocation. Results show that visual attention is first captured by the distractor action with the similar goal. The subsequent withdrawal of visual attention from that distractor suggests a later attentional capture by the distractor action with the similar grip. Overall, the results are in line with predictive approaches to action understanding, which assume that observers first make a prediction about the actor’s goal before verifying this prediction against the visual kinematics of the action.

Correction to: Detecting distortions of peripherally presented letter stimuli under crowded conditions
We discovered an error in the implementation of the function used to generate radial frequency (RF) distortions in our article (Wallis, Tobias, Bethge, & Wichmann, 2017).
The gist of Anne Treisman’s revolution

Anne Treisman investigated many aspects of perception, and in particular the roles of different forms of attention. Four aspects of her work are reviewed here: visual search, set mean perception, perception in special populations, and binocular rivalry. The importance of the breakthrough in each case is demonstrated. Search is fast or slow depending on whether it requires the application of global or focused attention. Mean perception depends on global attention and affords the simultaneous representation of the means of at least two sets of elements, as well as the comparison between them. Deficits exhibited by patients with Bálint’s syndrome or unilateral neglect identify basic sensory-system mechanisms. And the ability to integrate binocular information for stereopsis, despite simultaneous binocular rivalry for color, demonstrates the division of labor underlying visual-system computations. All these studies are related to an appreciation of the difference between perceiving the gist of a scene, its elements or objects, versus perceiving the details of the scene and its components. This relationship between Anne Treisman’s revolutionary discoveries and the concept of gist perception is the core of the current review.

Computer mouse tracking reveals motor signatures in a cognitive task of spatial language grounding

In a novel computer mouse tracking paradigm, participants read a spatial phrase such as “The blue item to the left of the red one” and then see a scene composed of 12 visual items. The task is to move the mouse cursor to the target item (here, blue), which requires perceptually grounding the spatial phrase. This entails visually identifying the reference item (here, red) and other relevant items through attentional selection. Response trajectories are attracted toward distractors that share the target color but match the spatial relation less well. Trajectories are also attracted toward items that share the reference color. A competing pair of items that match the specified colors but are in the inverse spatial relation increases attraction over-additively compared to individual items. Trajectories are also influenced by the spatial term itself. While the distractor effect resembles deviation toward potential targets in previous studies, the reference effect suggests that the relevance of the reference item for the relational task, not its role as a potential target, was critical. This account is supported by the strengthened effect of a competing pair. We conclude, therefore, that the attraction effects in the mouse trajectories reflect the neural processes that operate on sensorimotor representations to solve the relational task. The paradigm thus provides an experimental window through motor behavior into higher cognitive function and the evolution of activation in modal substrates, a longstanding topic in the area of embodied cognition.
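
Attraction effects of this kind are commonly quantified by the maximum absolute deviation (MAD) of the cursor path from the straight start-to-end line. A minimal sketch of that standard measure (the array layout is an assumption, not the authors’ code):

```python
import numpy as np

def max_absolute_deviation(x, y):
    """Signed maximum deviation of a cursor path from the straight line
    between its first and last samples; the sign indicates the side of
    the direct path (e.g., toward a distractor). Illustration only."""
    p0 = np.array([x[0], y[0]], dtype=float)
    p1 = np.array([x[-1], y[-1]], dtype=float)
    direction = (p1 - p0) / np.linalg.norm(p1 - p0)
    rel = np.column_stack([x, y]).astype(float) - p0
    # Signed perpendicular distance of every sample from the direct path
    signed_dist = rel[:, 0] * direction[1] - rel[:, 1] * direction[0]
    return signed_dist[np.abs(signed_dist).argmax()]

# Example: a path that bows sideways on its way to the target
print(max_absolute_deviation([0, 1, 2, 3], [0, 2, 3, 6]))
```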

Task-driven and flexible mean judgment for heterogeneous luminance ensembles

Spatial averaging of luminances over a variegated region has been assumed in visual processes such as light adaptation, texture segmentation, and lightness scaling. Despite the importance of these processes, how mean brightness can be computed remains largely unknown. We investigated how accurately and precisely mean brightness can be compared for two briefly presented heterogeneous luminance arrays composed of different numbers of disks. The results demonstrated that mean brightness judgments can be made in a task-dependent and flexible fashion. Mean brightness judgments measured via the point of subjective equality (PSE) exhibited a consistent bias, suggesting that observers relied strongly on a subset of the disks (e.g., the highest- or lowest-luminance disks) in making their judgments. Moreover, the direction of the bias flexibly changed with the task requirements, even when the stimuli were completely the same. When asked to choose the brighter array, observers relied more on the highest-luminance disks. However, when asked to choose the darker array, observers relied more on the lowest-luminance disks. In contrast, when the task was the same, observers’ judgments were almost immune to substantial changes in apparent contrast caused by changing the background luminance. Despite the bias in PSE, the mean brightness judgments were precise. The just-noticeable differences measured for multiple disks were similar to or even smaller than those for single disks, which suggested a benefit of averaging. These findings implicated flexible weighted averaging; that is, mean brightness can be judged efficiently by flexibly relying more on a few items that are relevant to the task.
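
For reference, the PSE and JND in tasks like this are standardly read off a psychometric function fitted to the choice proportions. A minimal sketch with made-up numbers, assuming a cumulative Gaussian (one common choice):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Proportion of "comparison brighter" responses as a function of the
# comparison array's mean luminance (hypothetical values)
levels = np.array([10.0, 12.0, 14.0, 16.0, 18.0, 20.0])    # cd/m^2
p_brighter = np.array([0.05, 0.20, 0.45, 0.70, 0.90, 0.97])

def psychometric(x, pse, sigma):
    # Cumulative Gaussian: the PSE is the 50% point, sigma sets the slope
    return norm.cdf(x, loc=pse, scale=sigma)

(pse, sigma), _ = curve_fit(psychometric, levels, p_brighter, p0=[15.0, 2.0])
jnd = sigma * norm.ppf(0.75)   # half the 25%-75% interval, a common JND convention
print(f"PSE = {pse:.2f} cd/m^2, JND = {jnd:.2f} cd/m^2")
```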

Influence of content and intensity of thought on behavioral and pupil changes during active mind-wandering, off-focus, and on-task states

Mind wandering (MW) is a pervasive phenomenon that occurs very frequently, regardless of the task. A content-based definition of MW holds that it occurs when the content of thought switches from an ongoing task and/or an external stimulus-driven event to self-generated or inner thoughts. A recent account suggests that the transition between these different states of attention occurs via an off-focus state. Following this suggestion, previous work relating MW to pupil size might have lumped together attentional states that are critically different from each other (i.e., off-focus and MW states). In the present study, both behavior and pupil size were measured during a sustained-attention-to-response task, to disentangle the content of thought (on task or MW) from an off-focus state of mind. The off-focus state was operationalized by probing the intensity with which participants were on task or mind-wandering. The results of two experiments showed that the behavioral and phasic pupillary responses were sensitive to changes related to the content of thought. The behavioral responses were furthermore related to the intensity of the thought. However, no clear relation between the different attentional states and tonic pupillary diameter was found, suggesting that tonic pupil diameter is an unreliable proxy for MW.

Bootstrapping a better slant: A stratified process for recovering 3D metric slant

Lind et al. (Journal of Experimental Psychology: Human Perception and Performance, 40(1), 83, 2014) proposed a bootstrap process that used right angles on 3D relief structure, viewed over sufficiently large continuous perspective change, to recover the scaling factor for metric shape. Wang, Lind, and Bingham (Journal of Experimental Psychology: Human Perception and Performance, 44(10), 1508-1522, 2018) replicated these results in the case of 3D slant perception. However, subsequent work by the same authors (Wang et al., 2019) suggested that the original solution could be ineffective for 3D slant and presented an alternative that used two equidistant points (a portion of the original right angle). We now describe a three-step stratified process to recover 3D slant using this new solution. Starting with 2D inputs, we (1) used an existing structure-from-motion (SFM) algorithm to derive the object’s 3D relief structure, (2) applied the bootstrap process to it to recover the unknown scaling factor, and (3) used that factor to produce a slant estimate. We present simulations of results from four previous experiments (Wang et al., 2018, 2019) to compare model and human performance, and we show that the stratified process has great predictive power, reproducing a surprising number of phenomena found in the human experiments. The modeling results also confirmed arguments made in Wang et al. (2019) that an axis of mirror symmetry in an object allows observers to use the recovered scaling factor to produce an accurate slant estimate. Thus, poor estimates in the absence of symmetry do not mean that the scaling factor has not been recovered, but merely that the direction of slant was ambiguous.
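
To make step (2) concrete: if the relief coordinates of a point are (x, y, z) while its true coordinates are (x, y, s·z) for an unknown scale s, and two recovered points a and b are known to be equidistant from a third point c in the world, then |a - c| = |b - c| can be solved for s algebraically. A minimal sketch of this computation (a hypothetical helper, not the published implementation):

```python
import numpy as np

def recover_depth_scale(a, b, c):
    """Solve for the relief-to-metric depth scale s, assuming points a and b
    are equidistant from c in the world and that relief coordinates (x, y, z)
    relate to true coordinates as (x, y, s * z). Illustration only."""
    a, b, c = (np.asarray(p, dtype=float) for p in (a, b, c))
    dxy_a = np.sum((a[:2] - c[:2]) ** 2)   # squared frontal-plane distance a-c
    dxy_b = np.sum((b[:2] - c[:2]) ** 2)
    dz_a = (a[2] - c[2]) ** 2              # squared relief-depth difference a-c
    dz_b = (b[2] - c[2]) ** 2
    # Equidistance in scaled space: dxy_a + s^2 * dz_a = dxy_b + s^2 * dz_b
    s_squared = (dxy_b - dxy_a) / (dz_a - dz_b)
    return np.sqrt(s_squared) if s_squared > 0 else None
```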

Concurrent evaluation of independently cued features during perceptual decisions and saccadic targeting in visual search

Simultaneous search for one of two targets is slower and less accurate than search for a single target. Within the Signal Detection Theoretic (SDT) framework, this can be attributed to the division of resources during the comparison of visual input against independently cued targets. The current study used one or two cues to elicit single- and dual-target searches for orientation targets among similar and dissimilar distractors. In Experiment 1, the accuracy of target discrimination in brief displays was compared at set sizes of 1, 2, and 4. Results revealed a reduction in accuracy that scaled with the product of set size and the number of cued targets. In Experiment 2, the accuracy and latency of observers’ saccadic targeting were compared. Fixations on single-target searches were highly selective towards the target. On dual-target searches, the requirement to detect one of two targets produced a significant reduction in target fixations and equivalent rates of fixations to distractors with opposite orientations. For most observers, the dual-target cost was predicted by an SDT model that simulated increases in decision noise and the distribution of capacity-limited resources during the comparison of selected input against independently cued targets. For others, search accuracy was consistent with a single-item limit on perceptual decisions and saccadic targeting during search. These findings support a flexible account of the dual-target cost based on different strategies for resolving competition between independently cued targets.
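
The flavor of this SDT account can be conveyed by a small max-rule simulation in which comparison noise grows as capacity is divided among cued templates. Every parameter value below is an assumption for illustration, not the fitted model:

```python
import numpy as np

rng = np.random.default_rng(1)

def prop_correct(set_size, n_templates, dprime=2.0, n_trials=20000):
    """Proportion correct for yes/no search under a max rule: each item is
    compared with each cued template, and noise scales with the number of
    templates (a capacity-sharing assumption). Illustration only."""
    sigma = np.sqrt(n_templates)
    criterion = dprime / 2
    # Target-absent trials: every comparison is noise
    fa = rng.normal(0.0, sigma, (n_trials, set_size * n_templates)).max(axis=1) > criterion
    # Target-present trials (the target's non-matching comparisons are omitted for brevity)
    evidence = rng.normal(dprime, sigma, n_trials)
    n_dist = (set_size - 1) * n_templates
    if n_dist:
        evidence = np.maximum(evidence, rng.normal(0.0, sigma, (n_trials, n_dist)).max(axis=1))
    hits = evidence > criterion
    return (np.mean(hits) + 1 - np.mean(fa)) / 2

for n_templates in (1, 2):   # single- vs. dual-target search
    print(n_templates, [round(prop_correct(s, n_templates), 2) for s in (1, 2, 4)])
```

In this toy version, accuracy declines with both set size and the number of cued templates, qualitatively echoing the dual-target cost.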

Symmetry mediates the bootstrapping of 3-D relief slant to metric slant

Empirical studies have consistently shown 3-D slant and shape perception to be inaccurate as a result of relief scaling (an unknown scaling along the depth direction). Wang, Lind, and Bingham (Journal of Experimental Psychology: Human Perception and Performance, 44(10), 1508–1522, 2018) discovered that sufficient relative motion between the observer and 3-D objects in the form of continuous perspective change (≥45°) could enable accurate 3-D slant perception. They attributed this to a bootstrap process (Lind, Lee, Mazanowski, Kountouriotis, & Bingham in Journal of Experimental Psychology: Human Perception and Performance, 40(1), 83, 2014) in which the perceiver identifies right angles formed by texture elements and tracks them in the 3-D relief structure through rotation to extrapolate the unknown scaling factor, which is then used to convert 3-D relief structure to 3-D Euclidean structure. This study examined the nature of the bootstrap process in slant perception. In a series of four experiments, we demonstrated that (1) features of 3-D relief structure, instead of 2-D texture elements, were tracked (Experiment 1); (2) identifying right angles was not necessary, and a different implementation of the bootstrap process is more suitable for 3-D slant perception (Experiment 2); and (3) mirror symmetry is necessary to produce accurate slant estimation using the bootstrapped scaling factor (Experiments 3 and 4). Together, the results support the hypothesis that a symmetry axis is used to determine the direction of slant and that 3-D relief structure is tracked over sufficiently large perspective change to produce metric depth, and more generally they support the bootstrap process.

Visual objects interact differently during encoding and memory maintenance

The storage mechanisms of working memory are a matter of ongoing debate. The sensory recruitment hypothesis states that memory maintenance and perceptual encoding rely on the same neural substrate. This suggests that the same cortical mechanisms that shape object perception also apply to maintained memory content. We tested this prediction using the Direction Illusion, i.e., the mutual repulsion of two concurrently visible motion directions. Participants memorized the directions of two random dot patterns for later recall. In Experiments 1 and 2, we varied the temporal separation of spatially distinct stimuli to manipulate perceptual concurrency while keeping concurrency within working memory constant. We observed mutual motion repulsion only under simultaneous stimulus presentation, but proactive repulsion and retroactive attraction under immediate stimulus succession. At inter-stimulus intervals of 0.5 and 2 s, however, the proactive repulsion vanished, while the retroactive attraction remained. In Experiment 3, we presented both stimuli at the same spatial position and observed a reappearance of the repulsion effect. Our results indicate that the repulsive mechanisms that shape object perception across space fade during the transition from a perceptual representation to a consolidated memory content. This suggests differences in the underlying structure of perceptual and mnemonic representations. The persistence of local interactions, however, indicates distinct mechanisms for spatially global and local feature interactions.

A comparison of simple movement behaviors across three different devices

Reaching trajectories have provided a unique tool for observing changes in internal cognitive decisions. Furthermore, technological advances have made devices for measuring reach movements more accessible, and researchers have recognized that various populations, including children, elderly adults, and non-human primates, can easily execute simple movements as responses. As a result, devices such as a three-dimensional (3D) reach tracker, a stylus, or a computer mouse have increasingly been utilized to study cognitive processes. However, although the specific type of tracking device that a researcher uses may impact behavior due to the constraints it places on movements, most researchers in these fields are unaware of this potential issue. Here, we examined the potential behavioral impact of using each of these three devices. To induce re-directed movements that mimic the movements that often occur following changes in cognitive states, we used a double-step task in which displacement of an initial target location requires participants to quickly re-direct their movement. We found that reach movement parameters were largely comparable across the three devices. However, hand movements measured by the 3D reach tracker showed earlier reach initiation latencies (relative to stylus movements) and more curved movement trajectories (relative to both mouse and stylus movements), and were re-directed more rapidly following target displacement. Thus, 3D reach trackers may be ideal for observing fast, subtle changes in internal decision-making processes compared with other devices. Taken together, this study provides a useful reference for comparing and implementing reaching studies to examine human cognition.

No one knows what attention is

In this article, we challenge the usefulness of “attention” as a unitary construct and/or neural system. We point out that the concept has too many meanings to justify a single term, and that “attention” is used to refer to both the explanandum (the set of phenomena in need of explanation) and the explanans (the set of processes doing the explaining). To illustrate these points, we focus our discussion on visual selective attention. It is argued that selectivity in processing has emerged through evolution as a design feature of a complex multi-channel sensorimotor system, which generates selective phenomena of “attention” as one of many by-products. Instead of the traditional analytic approach to attention, we suggest a synthetic approach that starts with well-understood mechanisms that do not need to be dedicated to attention, and yet account for the selectivity phenomena under investigation. We conclude that what would serve scientific progress best would be to drop the term “attention” as a label for a specific functional or neural system and instead focus on behaviorally relevant selection processes and the many systems that implement them.

Correction to: Visual search asymmetry depends on target-distractor feature similarity: Is the asymmetry simply a result of distractor rejection speed?
In the original version of the published article the stimuli in Table 2 and Figure 2 were displayed incorrectly.
No evidence for an attentional bias towards implicit temporal regularities

Action and perception are optimized by exploiting temporal regularities, and it has been suggested that the attentional system prioritizes information that contains some form of structure. Indeed, Zhao, Al-Aidroos, and Turk-Browne (Psychological Science, 24(5), 667–677, 2013) found that attention was biased towards the location and low-level visual features of shapes that appeared with a regular order but were irrelevant for the main search task. Here, we investigate whether this bias also holds for irrelevant metrical temporal regularities. In six experiments, participants were asked to perform search tasks. In Experiments 1a–d, sequences of squares, each presented at one of four locations, appeared in between the search trials. Crucially, in one location, the square appeared with a regular rhythm, whereas the timing in the other locations was random. In Experiments 2a and 2b, a sequence of centrally presented colored circles was shown in between the search trials, of which one specific color appeared regularly. We expected that, if attention is automatically biased towards these temporal regularities, reaction times would be faster if the target matches the location (Experiments 1a–d) or color (Experiments 2a–b) of the regular stimulus. However, no reaction time benefit was observed for these targets, suggesting that there was no attentional bias towards the regularity. In addition, we found no evidence for attentional entrainment to the rhythmic stimulus. These results suggest that people do not use implicit rhythmic temporal regularities to guide their attention in the same way as they use order regularities.

Preview of partial stimulus information in search prioritizes features and conjunctions, not locations

Visual search often requires combining information on distinct visual features such as color and orientation, but how the visual system does this is not fully understood. To better understand this, we showed observers a brief preview of part of a search stimulus—either its color or orientation—before they performed a conjunction search task. Our experimental questions were (1) whether observers would use such previews to prioritize either potential target locations or features, and (2) which neural mechanisms might underlie the observed effects. In two experiments, participants searched for a prespecified target in a display consisting of bar elements, each combining one of two possible colors and one of two possible orientations. Participants responded by making an eye movement to the selected bar. In our first experiment, we found that a preview consisting of colored bars with identical orientation improved saccadic target selection performance, while a preview of oriented gray bars substantially decreased performance. In a follow-up experiment, we found that previews consisting of discs of the same color as the bars (and thus without orientation information) hardly affected performance. Thus, performance improved only when the preview combined color and (noninformative) orientation information. Previews apparently result in a prioritization of features and conjunctions rather than of spatial locations (in the latter case, all previews should have had similar effects). Our results thus also indicate that search for, and prioritization of, combinations involve conjunctively tuned neural mechanisms. These probably reside at the level of the primary visual cortex.

To quit or not to quit in dynamic search

Searching for targets among similar distractors requires more time as the number of items increases, with search efficiency measured by the slope of the reaction-time (RT)/set-size function. Horowitz and Wolfe (Nature, 394(6693), 575–577, 1998) found that the target-present RT slopes were similar for “dynamic” and for standard static search, even though the items were randomly reshuffled every 110 ms in dynamic search. Somewhat surprisingly, attempts to understand dynamic search have ignored the fact that the target-absent RT slope was as low (or “flat”) as the target-present slope, so that the mechanisms driving search performance under dynamic conditions remain unclear. Here, we report three experiments that further explored search in dynamic versus static displays. Experiment 1 confirmed that the target-absent:target-present slope ratio was close to or smaller than 1 in dynamic search, as compared with being close to or above 2 in static search. This pattern did not change when reward was assigned to either correct target-absent or correct target-present responses (Experiment 2), or when the search difficulty was increased (Experiment 3). Combining analyses of search sensitivity and response criteria, we developed a multiple-decisions model that successfully accounts for the differential slope patterns in dynamic versus static search. Two factors in the model turned out to be critical for generating the 1:1 slope ratio in dynamic search: the “quit-the-search” decision variable was accumulated based upon the likelihood of “target absence” within each individual sample in the multiple-decisions process, while the stopping threshold was a linear function of set size and the reward manipulation.
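
For readers unfamiliar with the slope measure itself: search efficiency and the absent:present slope ratio follow from regressing RT on set size. A toy computation with hypothetical RTs:

```python
import numpy as np

set_sizes = np.array([4, 8, 12, 16])
rt_present = np.array([620, 700, 780, 860])    # hypothetical mean RTs (ms)
rt_absent = np.array([650, 810, 970, 1130])

slope_present = np.polyfit(set_sizes, rt_present, 1)[0]   # ms per item
slope_absent = np.polyfit(set_sizes, rt_absent, 1)[0]
# Classic static search approximates a 2:1 absent:present slope ratio;
# the dynamic-search puzzle is a ratio at or below 1:1.
print(slope_present, slope_absent, slope_absent / slope_present)  # 20.0 40.0 2.0
```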

Center bias outperforms image salience but not semantics in accounting for attention during scene viewing

How do we determine where to focus our attention in real-world scenes? Image saliency theory proposes that our attention is ‘pulled’ to scene regions that differ in low-level image features. However, models that formalize image saliency theory often contain significant scene-independent spatial biases. In the present studies, three different viewing tasks were used to evaluate whether image saliency models account for variance in scene fixation density based primarily on scene-dependent, low-level feature contrast, or on their scene-independent spatial biases. For comparison, fixation density was also compared to semantic feature maps (Meaning Maps; Henderson & Hayes, Nature Human Behaviour, 1, 743–747, 2017) that were generated using human ratings of isolated scene patches. The squared correlations (R²) between scene fixation density and each image saliency model’s center bias, each full image saliency model, and meaning maps were computed. The results showed that in tasks that produced observer center bias, the image saliency models on average explained 23% less variance in scene fixation density than their center biases alone. In comparison, meaning maps explained on average 10% more variance than center bias alone. We conclude that image saliency theory generalizes poorly to real-world scenes.
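
At the level of analysis, each comparison reduces to correlating a predictor map with the fixation-density map across pixels and squaring the correlation. A minimal sketch (hypothetical arrays):

```python
import numpy as np

def map_r2(predictor_map, fixation_density):
    """Squared linear correlation between a predictor map (e.g., a saliency
    model, its center bias, or a meaning map) and scene fixation density."""
    x = np.asarray(predictor_map, dtype=float).ravel()
    y = np.asarray(fixation_density, dtype=float).ravel()
    r = np.corrcoef(x, y)[0, 1]
    return r ** 2
```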

Interference of irrelevant information in multisensory selection depends on attentional set

In the multisensory world in which we live, certain objects and events are of more relevance than others. In the laboratory, this broadly equates to the distinction between targets and distractors. In selection situations like the flanker task, the evidence suggests that the processing of multisensory distractors is influenced by attention. Here, multisensory distractor processing was investigated by modulating attentional set in three experiments in a flanker interference task, in which the targets were unisensory while the distractors were multisensory. Attentional set was modulated by making the target modality either predictable or unpredictable (Experiments 1 vs. 2, respectively). In Experiment 3, this manipulation was implemented on a within-experiment basis. Furthermore, the third experiment compared audiovisual distractors (used in all experiments) with distractors with one feature in a neutral modality (i.e., touch), that never appeared as the target modality in the flanker task. The results demonstrate that there was no interference from the response-compatible crossmodal distractor feature when the target modality was predictable (i.e., blocked). However, when the modality was varied on a trial-by-trial basis, this crossmodal feature significantly influenced information processing. By contrast, a multisensory distractor with a neutral crossmodal feature never influenced behavior. This finding suggests that the processing of multisensory distractors depends on attentional set. When the target modality varies randomly, participants include features from both modalities in their attentional set and the irrelevant crossmodal feature, now part of the set, influences information processing. In contrast, interference from the crossmodal distractor feature does not occur when it is not part of the attentional set.

Correction to: How to correctly put the “subsequent” in subsequent search miss errors
The following formatting changes to the figures and table need to be made in order to enhance readability.
Evidence for early top-down modulation of attention to salient visual cues through probe detection

The influence of top-down attentional control on the selection of salient visual stimuli has been examined extensively. Some accounts suggest all salient stimuli capture attention in a stimulus-driven manner, while others suggest salient stimuli capture attention contingent on top-down relevance. Evidence consistently shows that target templates allow only salient stimuli sharing a target’s features to capture attention, while salient stimuli not sharing a target’s features do not. A number of hypotheses (e.g., contingent orienting, disengagement, signal suppression) from both sides of this debate have been proposed; however, most predict similar performance in visual search and spatial cuing tasks. The present study combined a cuing task, in which subjects identified a target defined by its having a unique feature, with a probe identification task developed by Gaspelin, Leonard, and Luck (Psychological Science, 26, 1740-1750, 2015), in which subjects identified letters appearing in potential target locations just after the appearance of a salient cue that matched or did not match the target-defining feature. The probe task provided a measure of where attention was focused just after the cue’s appearance. In six experiments, we observed top-down modulation of spatial cuing effects in response times and probe identification: probes in the cued location were identified more often than probes in other locations, and more so when the cue shared the target-defining feature. Though not unequivocal, the results are discussed in terms of the ongoing debate over whether top-down attentional control can prevent bottom-up capture by salient, task-irrelevant stimuli.

Constancy bias: When we “fill in the blanks” of unattended or forgotten stimuli

Our ability to form predictions about the behavior of objects outside our focus of attention and to recognize when those expectations have been violated is critical to our survival. One principle that greatly influences our beliefs about unattended stimuli is that of constancy, or the tendency to assume objects outside our attention have remained constant, and the next time we attend to them they will be unchanged. Although this phenomenon is familiar from research on inattentional blindness, it is currently unclear when constancy is assumed and what conditions are adequate to convince us that unattended stimuli have likely undergone a change while outside of our attentional spotlight. Using a simple change-detection task, we sought to show that unattended stimuli are strongly predisposed to be perceived as unchanging when presented on constant, unchanging backgrounds; however, when stimuli were presented with significant incidental visual activity, participants were no longer biased towards change rejection. We found that participants were far more likely to report that a change had occurred if target presentation was accompanied by salient, incidental visual activity. We take these results to indicate that when an object is not represented in working memory, we use environmental conditions to judge whether or not these items are likely to have undergone a change or remained constant.

Axis of rotation as a basic feature in visual search

Searching for a “Q” among “O”s is easier than the opposite search (Treisman & Gormican in Psychological Review, 95, 15–48, 1988). In many cases, such “search asymmetries” occur because it is easier to search when a target is defined by the presence of a feature (i.e., the line terminator defining the tail of the “Q”), rather than by its absence. Treisman proposed that features that produce a search asymmetry are “basic” features in visual search (Treisman & Gormican in Psychological Review, 95, 15–48, 1988; Treisman & Souther in Journal of Experimental Psychology: General, 114, 285–310, 1985). Other stimulus attributes, such as color, orientation, and motion, have been found to produce search asymmetries (Dick, Ullman, & Sagi in Science, 237, 400–402, 1987; Treisman & Gormican in Psychological Review, 95, 15–48, 1988; Treisman & Souther in Journal of Experimental Psychology: General, 114, 285–310, 1985). Other stimulus properties, such as facial expression, produce asymmetries because one type of item (e.g., neutral faces) demands less attention in search than another (e.g., angry faces). In the present series of experiments, search for a rolling target among spinning distractors proved to be more efficient than searching for a spinning target among rolling distractors. The effect does not appear to be due to differences in physical plausibility, direction of motion, or texture movement. Our results suggest that the spinning stimuli demand less attention, making search through spinning distractors for a rolling target easier than the opposite search.

Perception it is: Processing level in multisensory selection

When we are repeatedly exposed to simultaneously presented stimuli, associations between these stimuli are nearly always established, both within and between sensory modalities. Such associations guide our subsequent actions and may also play a role in multisensory selection. Thus, crossmodal associations (i.e., associations between stimuli from different modalities) learned in a multisensory interference task might affect subsequent information processing. The aim of this study was to investigate the processing level of multisensory stimuli in multisensory selection by means of crossmodal aftereffects. Either feature or response associations were induced in a multisensory flanker task, while the amount of interference in a subsequent crossmodal flanker task was measured. The results of Experiment 1 revealed the existence of crossmodal interference after multisensory selection. Experiments 2 and 3 then went on to demonstrate the dependence of this effect on the perceptual associations between the features themselves, rather than on the associations between feature and response. Establishing response associations did not lead to a subsequent crossmodal interference effect (Experiment 2), while stimulus feature associations without response associations (obtained by changing the response effectors) did (Experiment 3). Taken together, this pattern of results suggests that associations in multisensory selection, and the interference of (crossmodal) distractors, predominantly operate at the perceptual, rather than the response, level.

Distractor familiarity reveals the importance of configural information in musical notation

The study of perceptual expertise in a visual domain requires the definition of boundaries for the objects that are part of the domain in question. Unlike other well-studied domains, such as faces or words, the domain of musical notation has been lacking in efforts to identify critical features that define the objects of music reading. In the present study, we took advantage of the distractor familiarity effect in visual search. We asked participants to search for a prespecified target note among familiar/unfamiliar distractor notes when two features of musical notation, dot–stem configuration (the way of connecting the dot and the stem of a note) and connectedness (whether or not the dot and the stem of a note were connected), were manipulated. A participant’s level of music-reading expertise predicted the magnitude of the distractor familiarity effect only when the dot–stem configuration was diagnostic for the search. Connectedness did not induce a distractor familiarity effect, regardless of its diagnosticity. Dot–stem configuration is a defining feature of music notes, helping to characterize the boundaries of the domain of music-reading expertise. This work has also improved on the tasks used to quantify expertise in reading musical notation.

Attention and binding in visual working memory: Two forms of attention and two kinds of buffer storage

We review our research on the episodic buffer in the multicomponent model of working memory (Baddeley, 2000), making explicit the influence of Anne Treisman’s work on the way our research has developed. The crucial linking theme concerns binding, whereby the individual features of an episode are combined as integrated representations. We summarize a series of experiments on visual working memory that investigated the retention of feature bindings and individual features. The effects of cognitive load, perceptual distraction, prioritization, serial position, and their interactions form a coherent pattern. We interpret our findings as demonstrating contrasting roles of externally driven and internally driven attentional processes, as well as a distinction between visual buffer storage and the focus of attention. Our account has strong links with Treisman’s concept of focused attention and aligns with a number of contemporary approaches to visual working memory.

Biological motion and animacy belief induce similar effects on involuntary shifts of attention

Biological motion is salient to the human visual and motor systems and may be intrinsic to the perception of animacy. Evidence for the salience of visual stimuli moving with trajectories consistent with biological motion comes from studies showing that such stimuli can trigger shifts of attention in the direction of that motion. The present study was conducted to determine whether or not top-down beliefs about animacy can modify the salience of a nonbiologically moving stimulus to the visuomotor system. A nonpredictive cuing task was used in which a white dot moved from a central location toward a left- or right-sided target placeholder. The target randomly appeared at either location 200, 600, or 1,300 ms after the motion onset. Five groups of participants experienced different stimulus conditions: (1) biological motion, (2) inverted biological motion, (3) nonbiological motion, (4) animacy belief (paired with nonbiological motion), and (5) computer-generated belief (paired with nonbiological motion). Analysis of response times revealed that the motion in the biological motion and animacy belief groups, but not in the inverted and nonbiological motion groups, affected processing of the target information. These findings indicate that biological motion is salient to the visual system and that top-down beliefs regarding the animacy of the stimulus can tune the visual and motor systems to increase the salience of nonbiological motion.

Visual working memory load does not eliminate visuomotor repetition effects

When we respond to a stimulus, our ability to quickly execute this response depends on how combinations of stimulus and response features match previous combinations of stimulus and response features. Some kind of memory representation must underlie these visuomotor repetition effects. In this paper, we tested the hypothesis that visual working memory stores the stimulus information that gives rise to these effects. Participants discriminated the colors of successive stimuli while holding either three locations or three colors in visual working memory. If visual working memory maintains the information about a previous event that leads to visuomotor repetition effects, then occupying working memory with colors or locations should selectively disrupt color–response and location–response repetition effects, respectively. The results of two experiments showed that neither color nor spatial memory load eliminated visuomotor repetition effects. Since working memory load did not disrupt repetition effects, it is unlikely that visual working memory resources are used to store the information that underlies visuomotor repetition effects. Instead, these results are consistent with the view that visuomotor repetition effects stem from automatic long-term memory retrieval, but they can also be accommodated by supposing separate buffers for visual working memory and response selection.

Cross-modal correspondences in sine wave: Speech versus non-speech modes

The present study aimed to investigate whether or not the so-called “bouba-kiki” effect is mediated by speech-specific representations. Sine-wave versions of naturally produced pseudowords were used as auditory stimuli in an implicit association task (IAT) and an explicit cross-modal matching (CMM) task to examine cross-modal shape-sound correspondences. A group of participants trained to hear the sine-wave stimuli as speech was compared to a group that heard them as non-speech sounds. Sound-shape correspondence effects were observed in both groups and tasks, indicating that speech-specific processing is not fundamental to the “bouba-kiki” phenomenon. Effects were similar across groups in the IAT, while in the CMM task the speech-mode group showed a stronger effect compared with the non-speech group. This indicates that, while both tasks reflect auditory-visual associations, only the CMM task is additionally sensitive to associations involving speech-specific representations.

Crossing event boundaries changes prospective perceptions of temporal length and proximity

We conducted two experiments to investigate how crossing a single naturalistic event boundary impacted two different types of temporal estimation involving the same target duration – one where participants directly compared marked temporal durations and another where they judged the temporal proximity of stimuli. In Experiment 1, participants judged whether time intervals presented during movies of everyday events were shorter or longer than a previously encoded 5-s reference interval. We examined how the presence of a transition between events (event boundary) in the movie influenced people’s judgments about the length of the comparison interval. Comparison intervals presented during a portion of the movie containing an event boundary were judged as shorter than the reference interval more often than comparison intervals that contained no boundary. Working-memory updating at the event boundary may have directed attention away from the concurrent timing task. In Experiment 2, participants judged whether the second of three tones presented during everyday movies was closer to the first or the third tone presented. Tones separated by an event boundary were judged as farther apart than tones contained within the same event. When judging temporal proximity, attention directed to processing information at an event boundary between two stimuli may disrupt the formation of temporal associations between those stimuli. Overall, these results demonstrate that crossing a single event boundary can impact people’s prospective perceptions of the temporal characteristics of their experience and suggest that the episodic memory updating that occurs during an event boundary both captures timing-relevant attentional resources and plays a role in the temporal binding of information.

Can the diffuseness of sound sources in an auditory scene alter speech perception?

When amplification is used, sound sources are often presented over multiple loudspeakers, which can alter their timbre and introduce comb-filtering effects. Increasing the diffuseness of a sound by presenting it over spatially separated loudspeakers might affect listeners' ability to form a coherent auditory image of it, alter its perceived spatial position, and even change the extent to which it competes for the listener's attention. In addition, it can lead to comb-filtering effects that alter the spectral profiles of sounds arriving at the ears. It is important to understand how these changes affect speech perception. In this study, young adults were asked to repeat nonsense sentences presented in noise, babble, or speech. Participants were divided into two groups: (1) a Compact-Target Timbre group, where the target sentences were presented over a single loudspeaker (compact target) while the masker was presented over either three loudspeakers (diffuse) or a single loudspeaker (compact); (2) a Diffuse-Target Timbre group, where the target sentences were diffuse while the masker was either compact or diffuse. Timbre had no significant effect in the absence of a timbre contrast between target and masker. However, when there was a timbre contrast, the signal-to-noise ratios needed for 50% correct recognition of the target speech were higher (worse) when the masker was compact, and lower (better) when the target was compact. These results are consistent with the expected effects of comb filtering, and could also reflect a tendency for attention to be drawn towards compact sound sources.
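
As a rough illustration of the comb filtering mentioned in this abstract (a textbook sketch, not taken from the article itself): when the same signal reaches the ear twice with a relative delay $\tau$, e.g., from two loudspeakers at different distances, the pair acts as a filter with magnitude response

$$|H(f)| = \left|1 + e^{-i 2\pi f \tau}\right| = 2\left|\cos(\pi f \tau)\right|,$$

which has spectral notches at $f = (2k+1)/(2\tau)$ for integer $k$ – the regularly spaced "teeth" that give the effect its name and that alter the spectral profile of the target or masker.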

Establishing a role for the visual complexity of linguistic stimuli in age-related reading difficulty: Evidence from eye movements during Chinese reading

Older adults experience greater difficulty than young adults during both alphabetic and nonalphabetic reading. While this age-related reading difficulty may be attributable to visual and cognitive declines in older adulthood, the underlying causes remain unclear. With the present research, we focused on effects related to the visual complexity of written language. Chinese is ideally suited to investigating such effects, as characters in this logographic writing system can vary substantially in complexity (in terms of their number of strokes, i.e., lines and dashes) while always occupying the same square area of space, so that complexity is not confounded with word length. Nonreading studies suggest that older adults have greater difficulty than young adults when recognizing characters with high compared to low numbers of strokes. The present research used measures of eye movements to investigate adult age differences in these effects during natural reading. Young adult (18–28 years) and older adult (65+ years) participants read sentences that included one of a pair of two-character target words matched for lexical frequency and contextual predictability, but composed of either high-complexity (>9 strokes) or low-complexity (≤7 strokes) characters. Typical patterns of age-related reading difficulty were observed. Moreover, the effect of visual complexity on reading times for words was greater for the older than for the young adults, due to the older readers experiencing greater difficulty identifying words containing many rather than few strokes. We interpret these findings in terms of the influence of subtle deficits in visual abilities on reading capabilities in older adulthood.

Mechanisms of contextual cueing: A tutorial review

Repeated contexts yield faster response times in visual search compared with novel contexts, an effect known as contextual cueing. Despite extensive study over the past two decades, there remains a spirited debate over whether repeated displays expedite search before the target is found (early locus) or facilitate responses after the target is found (late locus). Here, we provide a tutorial review of contextual cueing, with a focus on assessing the locus of the effect. We evaluate the evidence from psychophysics, EEG, and eye tracking. Existing studies support an early locus of contextual cueing, consistent with attentional-guidance accounts. Evidence for a late locus exists, though it is less conclusive. The existing literature also highlights a distinction between habit-guided attention learned through experience and changes in spatial priority driven by task goals and stimulus salience.

Preserved tactile acuity in older pianists

A previous study from our lab demonstrated retention of high tactile acuity throughout the lifespan in blind subjects, in contrast to the typical decline found for sighted subjects (Legge, Madison, Vaughn, Cheong, & Miller, Perception & Psychophysics, 70(8), 1471–1488, 2008). We hypothesize that preserved tactile acuity in old age is due to lifelong experience with focused attention to touch, and not to blindness per se. Proficient pianists devote attention to touch – fingerings and dynamics – over years of practice. To test our hypothesis, we measured tactile acuity in groups of 10 young (mean age 24.5 years) and 11 old (mean age 64.7 years) normally sighted pianists and compared their results to those of the blind and sighted subjects in our 2008 study. The pianists, like the subjects in 2008, were tested on two tactile-acuity charts requiring active touch, one composed of embossed Landolt rings and the other of dot patterns similar to braille. On both tests, the pianists performed more like the blind subjects than the sighted subjects from our 2008 study. For the ring chart, there was no significant difference in tactile acuity between the young and old pianists and no significant difference between the pianists and the blind subjects. For the dot chart, the pianists showed an age-related decline in tactile acuity, but one less severe than that of the sighted subjects from 2008. Our results are consistent with the hypothesis that lifelong experience with focused attention to touch preserves tactile acuity into old age for both blind and sighted subjects.

Higher attentional costs for numerosity estimation at high densities

Humans can estimate numerosity over a large range, but the precision with which they do so varies considerably over that range. For very small sets, within the subitizing range of up to about four items, estimation is rapid and errorless. For intermediate numerosities, errors vary directly with numerosity, following Weber's law, but for very high numerosities, with very dense patterns, thresholds continue to rise only with the square root of numerosity. This suggests that three different mechanisms operate over the number range. In this study we provide further evidence for three distinct numerosity mechanisms by studying their dependence on attentional resources. We measured discrimination thresholds over a wide range of numerosities while manipulating attentional load with both visual and auditory dual tasks. The results show that attentional effects on thresholds vary over the number range: both visual and auditory attentional loads affect subitizing much more strongly than they affect larger numerosities. Attentional costs remain stable over the estimation range, then rise again for very dense patterns. These results reinforce the idea that numerosity is processed by three separate, but probably overlapping, systems.
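
To make the three threshold regimes described above concrete (a standard psychophysical formulation implied by the abstract, not notation taken from the article): writing $N$ for numerosity and $\Delta N$ for the discrimination threshold,

$$\Delta N \approx 0 \;\; (N \lesssim 4, \text{ subitizing}), \qquad \Delta N \propto N \;\; (\text{estimation range}), \qquad \Delta N \propto \sqrt{N} \;\; (\text{dense patterns}),$$

so the Weber fraction $\Delta N / N$ is roughly constant over the intermediate range and falls as $1/\sqrt{N}$ at high densities.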

Change detection for real-world objects in perihand space

Recent evidence has demonstrated that observers experience visual-processing biases in perihand space that may be tied to the hands' relevance for grasping actions. Our previous work suggested that when the hands are positioned to afford a power-grasp action, observers show increased temporal sensitivity that could aid fast and forceful action, whereas when the hands are instead at the ready to perform a precision-grasp action, observers show enhanced spatial sensitivity that benefits delicate and detail-oriented actions. In the present investigation we sought to extend these findings by examining how object affordances may interact with hand positioning to shape visual biases in perihand space. Across three experiments, we examined how long participants took to perform a change-detection task on photos of real objects while we manipulated hand position (near/far from display), grasp posture (power/precision), and change type (orientation/identity). Participants viewed objects that afforded either a power grasp or a precision grasp, or were ungraspable. Although our first experiment was unable to uncover evidence of altered vision in perihand space, Experiments 2 and 3 mirrored our previous findings: participants showed grasp-dependent biases near the hands when detecting changes to target objects that afforded a power grasp. Interestingly, ungraspable target objects were not subject to the same perihand-space biases. Taken together, our results suggest that the influence of hand position on change-detection performance is mediated not only by the hands' grasp posture, but also by a target object's affordances for grasping.

Media multitasking, mind-wandering, and distractibility: A large-scale study

Previous studies have suggested that frequent media multitasking – the simultaneous use of different media – may be associated with increased susceptibility to internal and external sources of distraction, whereas other studies have found no evidence for such associations. Here, we report the results of a large-scale study (N = 261) in which we measured media multitasking with a short media-use questionnaire and measured distraction with a change-detection task that included different numbers of distractors. To determine whether internally generated distraction affected performance, we deployed experience-sampling probes during the change-detection task. The results showed that participants with higher media-multitasking scores did not perform worse as distractor set size increased, did not perform worse in general, and, judging by their responses to the experience-sampling probes, did not experience more lapses of attention during the task. Critically, these results were robust across different methods of analysis (linear mixed modeling, Bayes factors, and an extreme-groups comparison). At the same time, our use of the short version of the media-use questionnaire might limit the generalizability of our findings. In light of these results, we suggest that future studies ensure an adequate level of statistical power and implement a more precise measure of media multitasking.

The effect of emotional primes on attentional focus in high versus low depression

The effect of negative emotional stimuli on attentional focus is unclear. While a number of studies suggest that negative emotional stimuli improve attention, other studies show the opposite effect—namely, that negative emotional stimuli can impair attention and, specifically, attentional focus. It has been suggested that the detrimental effect of negative stimuli on attention is caused by attentional capture and difficulties in disengaging from the stimuli, an effect that is known to be stronger in depressed individuals. In the current study, we aimed to investigate the effect of negative primes on attentional focus as a function of levels of depression. Sixty-seven participants completed the attentional focus task, with either a neutral or a negative emotional prime preceding each trial. Results showed that attentional focus is improved in negative conditions, but that this effect is contingent upon levels of depression: While there is almost no effect of emotion on individuals with low levels of depression, there is a robust effect on individuals with high levels of depression. These results shed light on the process through which individuals with high levels of depression excessively focus on negative information, while simultaneously dismissing neutral information—a crucial part of the vicious cycle of negative mood and depression. Potential clinical implications are discussed.
