30 results for Visual perception.
Abstract:
Purpose. Some children with visual stress and/or headaches have fewer symptoms when wearing colored lenses. Although subjective reports of improved perception exist, few objective correlates of these effects have been established. Methods. In a pilot study, 10 children who wore Intuitive Colorimeter lenses and claimed benefit from them, and two asymptomatic children, were tested. Steady-state visual evoked potentials were measured in response to low contrast patterns modulating at a frequency of 12 Hz. Four viewing conditions were compared: 1) no lens; 2) Colorimeter lens; 3) lens of complementary color; and 4) spectrally neutral lens with similar photopic transmission. Results. The asymptomatic children showed little or no difference between the lens and no lens conditions. When all the symptomatic children were analyzed together, a similar result was found. However, when the symptomatic children were divided into two groups depending on their symptoms, an interaction emerged. Children with visual stress but no headaches showed the largest amplitude visual evoked potential response in the no lens condition, whereas those children whose symptoms included severe headaches or migraine showed the largest amplitude visual evoked potential response when wearing their prescribed lens. Conclusions. The results suggest that it is possible to measure objective correlates of the beneficial subjective perceptual effects of colored lenses, at least in some children who have a history of migraine or severe headaches.
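The dependent measure in this design is the amplitude of the steady-state visual evoked potential at the 12 Hz modulation frequency. As a minimal sketch of how such an amplitude is commonly extracted (illustrative only, not the study's actual analysis pipeline; the epoching and sampling rate are assumptions), one can read off the Fourier component at the stimulation frequency:

```python
import numpy as np

def ssvep_amplitude(epoch, fs, stim_freq=12.0):
    """Amplitude of the steady-state response at the stimulation frequency.

    epoch: 1-D array of EEG samples, assumed to span an integer number
           of stimulation cycles so that 12 Hz falls on an FFT bin.
    fs:    sampling rate in Hz.
    """
    n = len(epoch)
    spectrum = np.fft.rfft(epoch)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    idx = np.argmin(np.abs(freqs - stim_freq))  # bin nearest 12 Hz
    return 2.0 * np.abs(spectrum[idx]) / n      # amplitude of that component

# Comparing this amplitude across the four lens conditions is the kind
# of contrast the study reports.
```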
Abstract:
Visual control of locomotion is essential for most mammals and requires coordination between perceptual processes and action systems. Previous research on the neural systems engaged by self-motion has focused on heading perception, which is only one perceptual subcomponent. For effective steering, it is necessary to perceive an appropriate future path and then bring about the required change to heading. Using functional magnetic resonance imaging in humans, we reveal a role for the parietal eye fields (PEFs) in directing spatially selective processes relating to future path information. A parietal area close to PEFs appears to be specialized for processing the future path information itself. Furthermore, a separate parietal area responds to visual position error signals, which occur when steering adjustments are imprecise. A network of three areas, the cerebellum, the supplementary eye fields, and dorsal premotor cortex, was found to be involved in generating appropriate motor responses for steering adjustments. This may reflect the demands of integrating visual inputs with the output response for the control device.
Abstract:
Locomoting through the environment typically involves anticipating impending changes in heading trajectory in addition to maintaining the current direction of travel. We explored the neural systems involved in the “far road” and “near road” mechanisms proposed by Land and Horwood (1995) using simulated forward or backward travel where participants were required to gauge their current direction of travel (rather than directly control it). During forward egomotion, the distant road edges provided future path information, which participants used to improve their heading judgments. During backward egomotion, the road edges did not enhance performance because they no longer provided prospective information. This behavioral dissociation was reflected at the neural level, where only simulated forward travel increased activation in a region of the superior parietal lobe and the medial intraparietal sulcus. Providing only near road information during a forward heading judgment task resulted in activation in the motion complex. We propose a complementary role for the posterior parietal cortex and motion complex in detecting future path information and maintaining current lane positioning, respectively.
Abstract:
Several theories of the mechanisms linking perception and action require that the links are bidirectional, but there is a lack of consensus on the effects that action has on perception. We investigated this by measuring visual event-related brain potentials to observed hand actions while participants prepared responses that were spatially compatible (e.g., both on the left side of the body) or incompatible, and action-type compatible (e.g., both finger taps) or incompatible, with the observed actions. An early enhanced processing of spatially compatible stimuli was observed, which is likely due to spatial attention. This was followed by an attenuation of processing for both spatially and action-type compatible stimuli, likely to be driven by efference copy signals that attenuate processing of predicted sensory consequences of actions. Attenuation was not response-modality specific; it was found for manual stimuli when participants prepared manual and vocal responses, in line with the hypothesis that action control is hierarchically organized. These results indicate that spatial attention and forward model prediction mechanisms have opposite, but temporally distinct, effects on perception. This hypothesis can explain the inconsistency of recent findings on action-perception links and thereby supports the view that sensorimotor links are bidirectional. Such effects of action on perception are likely to be crucial, not only for the control of our own actions but also in sociocultural interaction, allowing us to predict the reactions of others to our own actions.
Abstract:
Embodied theories of cognition propose that neural substrates used in experiencing the referent of a word, for example perceiving upward motion, should be engaged in weaker form when that word, for example ‘rise’, is comprehended. Motivated by the finding that the perception of irrelevant background motion at near-threshold, but not supra-threshold, levels interferes with task execution, we assessed whether interference from near-threshold background motion was modulated by its congruence with the meaning of words (semantic content) when participants completed a lexical decision task (deciding if a string of letters is a real word or not). Reaction times for motion words, such as ‘rise’ or ‘fall’, were slower when the direction of visual motion and the ‘motion’ of the word were incongruent — but only when the visual motion was at near-threshold levels. When motion was supra-threshold, the distribution of error rates, not reaction times, implicated low-level motion processing in the semantic processing of motion words. As the perception of near-threshold signals is not likely to be influenced by strategies, our results support a close contact between semantic information and perceptual systems.
Abstract:
Perception of our own bodies is based on integration of visual and tactile inputs, notably by neurons in the brain’s parietal lobes. Here we report a behavioural consequence of this integration process. Simply viewing the arm can speed up reactions to an invisible tactile stimulus on the arm. We observed this visual enhancement effect only when a tactile task required spatial computation within a topographic map of the body surface and the judgements made were close to the limits of performance. This effect of viewing the body surface was absent or reversed in tasks that either did not require a spatial computation or involved judgements well above performance limits. We consider possible mechanisms by which vision may influence tactile processing.
Abstract:
Background: Word deafness is a rare condition where pathologically degraded speech perception results in impaired repetition and comprehension but otherwise intact linguistic skills. Although impaired linguistic systems in aphasias resulting from damage to the neural language system (here termed central impairments) have been consistently shown to be amenable to external influences such as linguistic or contextual information (e.g. cueing effects in naming), it is not known whether similar influences can be shown for aphasia arising from damage to a perceptual system (here termed peripheral impairments). Aims: This study aimed to investigate the extent to which pathologically degraded speech perception could be facilitated or disrupted by providing visual as well as auditory information. Methods and Procedures: In three word repetition tasks, the participant with word deafness (AB) repeated words under different conditions: words were repeated in the context of a pictorial or written target, a distractor (semantic, unrelated, rhyme or phonological neighbour) or a blank page (nothing). Accuracy and error types were analysed. Results: AB was impaired at repetition in the blank condition, confirming her degraded speech perception. Repetition was significantly facilitated when accompanied by a picture or written example of the word and significantly impaired by the presence of a written rhyme. Errors in the blank condition were primarily formal whereas errors in the rhyme condition were primarily miscues (saying the distractor word rather than the target). Conclusions: Cross-modal input can both facilitate and further disrupt repetition in word deafness. The cognitive mechanisms behind these findings are discussed. Both top-down influence from the lexical layer on perceptual processes as well as intra-lexical competition within the lexical layer may play a role.
Abstract:
When human observers are exposed to even slight motion signals followed by brief visual transients—stimuli containing no detectable coherent motion signals—they perceive large and salient illusory jumps. This novel effect, which we call “high phi”, challenges well-entrenched assumptions about the perception of motion, namely the minimal-motion principle and the breakdown of coherent motion perception with steps above an upper limit. Our experiments with transients such as texture randomization or contrast reversal show that the magnitude of the jump depends on spatial frequency and transient duration, but not on the speed of the inducing motion signals, and the direction of the jump depends on the duration of the inducer. Jump magnitude is robust across jump directions and different types of transient. In addition, when a texture is actually displaced by a large step beyond dmax, a breakdown of coherent motion perception is expected, but in the presence of an inducer observers again perceive coherent displacements at or just above dmax. In sum, across a large variety of stimuli, we find that when incoherent motion noise is preceded by a small bias, instead of perceiving little or no motion, as suggested by the minimal-motion principle, observers perceive jumps whose amplitude closely follows their own dmax limits.
Abstract:
It is now established that native language affects one's perception of the world. However, it is unknown whether this effect is merely driven by conscious, language-based evaluation of the environment or whether it reflects fundamental differences in perceptual processing between individuals speaking different languages. Using brain potentials, we demonstrate that the existence in Greek of 2 color terms—ghalazio and ble—distinguishing light and dark blue leads to greater and faster perceptual discrimination of these colors in native speakers of Greek than in native speakers of English. The visual mismatch negativity, an index of automatic and preattentive change detection, was similar for blue and green deviant stimuli during a color oddball detection task in English participants, but it was significantly larger for blue than green deviant stimuli in native speakers of Greek. These findings establish an implicit effect of language-specific terminology on human color perception.
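The visual mismatch negativity reported here is a difference wave: the averaged response to deviant stimuli minus the averaged response to standards. A minimal sketch of that computation, assuming baseline-corrected single-trial epochs from one electrode (not the authors' actual pipeline):

```python
import numpy as np

def mismatch_wave(standard_epochs, deviant_epochs):
    """Deviant-minus-standard difference wave, the basis of the vMMN.

    Each input: array of shape (n_trials, n_samples) of baseline-corrected
    single-trial EEG from one electrode.
    """
    erp_standard = standard_epochs.mean(axis=0)  # average over trials
    erp_deviant = deviant_epochs.mean(axis=0)
    return erp_deviant - erp_standard

# A larger (more negative) difference wave for blue than for green
# deviants in Greek speakers would mirror the reported effect.
```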
Abstract:
The validity of the linguistic relativity principle continues to stimulate vigorous debate and research. The debate has recently shifted from the behavioural investigation arena to a more biologically grounded field, in which tangible physiological evidence for language effects on perception can be obtained. Using brain potentials in a colour oddball detection task with Greek and English speakers, a recent study suggests that language effects may exist at early stages of perceptual integration [Thierry, G., Athanasopoulos, P., Wiggett, A., Dering, B., & Kuipers, J. (2009). Unconscious effects of language-specific terminology on pre-attentive colour perception. Proceedings of the National Academy of Sciences, 106, 4567–4570]. In this paper, we test whether, in Greek speakers, exposure to a new cultural environment (the UK), whose colour terminology contrasts with that of their native language, affects early perceptual processing, as indexed by an electrophysiological correlate of visual detection of colour luminance. We also report semantic mapping of native colour terms and colour similarity judgements. Results reveal convergence of linguistic descriptions, cognitive processing, and early perception of colour in bilinguals. This result demonstrates for the first time substantial plasticity in early, pre-attentive colour perception and has important implications for the mechanisms that are involved in perceptual changes during the processes of language learning and acculturation.
Abstract:
When the sensory consequences of an action are systematically altered our brain can recalibrate the mappings between sensory cues and properties of our environment. This recalibration can be driven by both cue conflicts and altered sensory statistics, but neither mechanism offers a way for cues to be calibrated so they provide accurate information about the world, as sensory cues carry no information as to their own accuracy. Here, we explored whether sensory predictions based on internal physical models could be used to accurately calibrate visual cues to 3D surface slant. Human observers played a 3D kinematic game in which they adjusted the slant of a surface so that a moving ball would bounce off the surface and through a target hoop. In one group, the ball’s bounce was manipulated so that the surface behaved as if it had a different slant to that signaled by visual cues. With experience of this altered bounce, observers recalibrated their perception of slant so that it was more consistent with the assumed laws of kinematics and physical behavior of the surface. In another group, making the ball spin in a way that could physically explain its altered bounce eliminated this pattern of recalibration. Importantly, both groups adjusted their behavior in the kinematic game in the same way, experienced the same set of slants and were not presented with low-level cue conflicts that could drive the recalibration. We conclude that observers use predictive kinematic models to accurately calibrate visual cues to 3D properties of the world.
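The manipulation rests on simple bounce kinematics: the ball's velocity is mirror-reflected about the surface normal, so changing the effective slant changes the post-bounce trajectory. A minimal sketch of such a model (the coordinate convention and restitution parameter are illustrative assumptions, not the study's actual simulation):

```python
import numpy as np

def bounce(velocity, slant_deg, restitution=1.0):
    """Post-bounce velocity of a ball hitting a slanted planar surface.

    The surface is tilted by slant_deg about the x-axis; the incoming
    velocity is reflected about the surface normal, with the normal
    component scaled by a restitution coefficient.
    """
    theta = np.radians(slant_deg)
    normal = np.array([0.0, np.sin(theta), np.cos(theta)])  # unit normal
    v = np.asarray(velocity, dtype=float)
    v_n = np.dot(v, normal) * normal          # normal component of velocity
    return v - (1.0 + restitution) * v_n      # mirror reflection, scaled

# Simulating the bounce with a slant offset from the visually signaled one
# reproduces the discrepancy the altered-bounce group experienced.
```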
Abstract:
Technological innovations have had a profound influence on how we study sensory perception in humans and other animals. One example was the introduction of affordable computers, which radically changed the nature of visual experiments. It is clear that vision research is now at the cusp of a similar shift, this time driven by the use of commercially available, low-cost, high-fidelity virtual reality (VR). In this review we will focus on: (a) the research questions VR allows experimenters to address and why these research questions are important, (b) the things that need to be considered when using VR to study human perception, (c) the drawbacks of current VR systems, and (d) the future direction vision research may take, now that VR has become a viable research tool.
Abstract:
Dance is a rich source of material for researchers interested in the integration of movement and cognition. The multiple aspects of embodied cognition involved in performing and perceiving dance have inspired scientists to use dance as a means for studying motor control, expertise, and action-perception links. The aim of this review is to present basic research on cognitive and neural processes implicated in the execution, expression, and observation of dance, and to bring into relief contemporary issues and open research questions. The review addresses six topics: 1) dancers’ exemplary motor control, in terms of postural control, equilibrium maintenance, and stabilization; 2) how dancers’ timing and on-line synchronization are influenced by attention demands and motor experience; 3) the critical roles played by sequence learning and memory; 4) how dancers make strategic use of visual and motor imagery; 5) the insights into the neural coupling between action and perception yielded through exploration of the brain architecture mediating dance observation; and 6) a neuroaesthetics perspective that sheds new light on the way audiences perceive and evaluate dance expression. Current and emerging issues are presented regarding future directions that will facilitate the ongoing dialogue between science and dance.
Abstract:
Human observers exhibit large systematic distance-dependent biases when estimating the three-dimensional (3D) shape of objects defined by binocular image disparities. This has led some to question the utility of disparity as a cue to 3D shape and whether accurate estimation of 3D shape is at all possible. Others have argued that accurate perception is possible, but only with large continuous perspective transformations of an object. Using a stimulus that is known to elicit large distance-dependent perceptual bias (random dot stereograms of elliptical cylinders) we show that, contrary to these findings, the simple adoption of a more naturalistic viewing angle completely eliminates this bias. Using behavioural psychophysics, coupled with a novel surface-based reverse correlation methodology, we show that it is binocular edge and contour information that allows for accurate and precise perception and that observers actively exploit and sample this information when it is available.
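Generic reverse correlation relates trial-by-trial stimulus perturbations to the observer's judgments; the "surface-based" variant named here is the authors' own, but the core computation can be sketched as a classification image (the array shapes and response coding are assumptions):

```python
import numpy as np

def classification_image(noise_samples, responses):
    """Reverse-correlation 'classification image'.

    noise_samples: (n_trials, n_features) per-trial stimulus perturbations.
    responses:     boolean array, one judgment per trial.
    The difference of noise means, conditioned on the response, estimates
    which stimulus regions drove the observer's judgments.
    """
    noise = np.asarray(noise_samples, dtype=float)
    r = np.asarray(responses, dtype=bool)
    return noise[r].mean(axis=0) - noise[~r].mean(axis=0)
```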
Abstract:
Given capacity limits, only a subset of stimuli gives rise to a conscious percept. Neurocognitive models suggest that humans have evolved mechanisms that operate without awareness and prioritize threatening stimuli over neutral stimuli in subsequent perception. In this meta-analysis, we review evidence for this ‘standard hypothesis’ emanating from three widely used, but rather different, experimental paradigms that have been used to manipulate awareness. We found a small pooled threat-bias effect in the masked visual probe paradigm, a medium effect in the binocular rivalry paradigm and highly inconsistent effects in the breaking continuous flash suppression paradigm. Substantial heterogeneity was explained by the stimulus type: the only threat stimuli that were robustly prioritized across all three paradigms were fearful faces. Meta-regression revealed that anxiety may modulate threat biases, but only under specific presentation conditions. We also found that insufficiently rigorous awareness measures, inadequate control of response biases and low-level confounds may undermine claims of genuine unconscious threat processing. Considering the data together, we suggest that uncritical acceptance of the standard hypothesis is premature: current behavioral evidence for threat-sensitive visual processing that operates without awareness is weak.
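Pooled effects of the kind reported here typically come from a random-effects meta-analysis, in which each study is weighted by the inverse of its sampling variance plus an estimated between-study variance. A minimal sketch of the commonly used DerSimonian-Laird estimator (illustrative; the review's actual model may differ):

```python
import numpy as np

def random_effects_pool(effects, variances):
    """DerSimonian-Laird random-effects pooled effect size.

    effects:   per-study effect sizes (e.g., threat-bias estimates).
    variances: per-study sampling variances.
    Returns the pooled effect and its standard error.
    """
    y = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v                                   # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)            # heterogeneity statistic Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)       # between-study variance
    w_star = 1.0 / (v + tau2)                     # random-effects weights
    pooled = np.sum(w_star * y) / np.sum(w_star)
    return pooled, np.sqrt(1.0 / np.sum(w_star))
```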