30 results for visual perception
in CentAUR: Central Archive University of Reading - UK
Abstract:
The visual perception of size in different regions of external space was studied in Parkinson's disease (PD). A group of patients with worse left-sided symptoms (LPD) was compared with a group with worse right-sided symptoms (RPD) and with a group of age-matched controls on judgements of the relative height or width of two rectangles presented in different regions of external space. The relevant dimension of one rectangle (the 'standard') was held constant, while that of the other (the 'variable') was varied in a method of constant stimuli. The point of subjective equality (PSE) of rectangle width or height was obtained by probit analysis as the mean of the resulting psychometric function. When the standard was in left space, the PSE of the LPD group occurred when the variable was smaller, and when the standard was in right space, when the variable was larger. Similarly, when the standard rectangle was presented in upper space, and the variable in lower space, the PSE occurred when the variable was smaller, an effect which was similar in both left and right spaces. In all these experiments, the PSEs for both the controls and the RPD group did not differ significantly, and were close to a physical match, and the slopes of the psychometric functions were steeper in the controls than the patients, though not significantly so. The data suggest that objects appear smaller in the left and upper visual spaces in LPD, probably because of right hemisphere impairment. (C) 2002 Elsevier Science Ltd. All rights reserved.
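The constant-stimuli procedure above can be sketched in code. This is a minimal illustration with made-up stimulus levels and response proportions, not the study's data; the study obtained the PSE by probit analysis, whereas here the 50% point is approximated by linear interpolation, which coincides with the probit mean for a symmetric psychometric function.

```python
# Hedged sketch: estimating the point of subjective equality (PSE) from
# method-of-constant-stimuli data. Levels and proportions are illustrative.

def pse_by_interpolation(levels, p_larger):
    """Return the stimulus level at which p('variable looks larger') = 0.5."""
    pairs = list(zip(levels, p_larger))
    for (x0, p0), (x1, p1) in zip(pairs, pairs[1:]):
        if p0 <= 0.5 <= p1:
            # linear interpolation between the two bracketing levels
            return x0 + (0.5 - p0) * (x1 - x0) / (p1 - p0)
    raise ValueError("0.5 is not bracketed by the data")

levels = [90, 95, 100, 105, 110]           # variable width (% of the standard)
p_larger = [0.05, 0.20, 0.45, 0.80, 0.95]  # proportion judged 'larger'

pse = pse_by_interpolation(levels, p_larger)
# a PSE above 100 would mean the variable had to be physically larger
# to match the standard, i.e. the standard's region looked smaller
```

A shifted PSE of this kind is what distinguishes the LPD group from the controls and the RPD group in the study.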
Abstract:
Recent theories propose that semantic representation and sensorimotor processing have a common substrate via simulation. We tested the prediction that comprehension interacts with perception, using a standard psychophysics methodology. While passively listening to verbs that referred to upward or downward motion, and to control verbs that did not refer to motion, 20 subjects performed a motion-detection task, indicating whether or not they saw motion in visual stimuli containing threshold levels of coherent vertical motion. A signal detection analysis revealed that when verbs were directionally incongruent with the motion signal, perceptual sensitivity was impaired. Word comprehension also affected decision criteria and reaction times, but in different ways. The results are discussed with reference to existing explanations of embodied processing and the potential of psychophysical methods for assessing interactions between language and perception.
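The signal detection analysis mentioned above separates perceptual sensitivity (d') from response bias (the criterion c). A minimal sketch, with illustrative hit and false-alarm rates rather than the study's data:

```python
# Hedged sketch of a signal detection analysis: sensitivity d' and
# criterion c from hit and false-alarm rates. Rates are illustrative.
from statistics import NormalDist

def dprime_and_criterion(hit_rate, fa_rate):
    z = NormalDist().inv_cdf          # inverse of the standard normal CDF
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# hypothetical congruent vs incongruent verb-motion pairings
d_cong, c_cong = dprime_and_criterion(0.80, 0.20)
d_incong, c_incong = dprime_and_criterion(0.70, 0.25)
```

On this analysis, the incongruence effect reported above would appear as a lower d' for incongruent pairings, independently of any shift in c.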
Abstract:
Synesthesia entails a special kind of sensory perception, where stimulation in one sensory modality leads to an internally generated perceptual experience of another, not stimulated sensory modality. This phenomenon can be viewed as an abnormal multisensory integration process, as here the synesthetic percept is aberrantly fused with the stimulated modality. Indeed, recent synesthesia research has focused on multimodal processing even outside of the specific synesthesia-inducing context and has revealed changed multimodal integration, thus suggesting perceptual alterations at a global level. Here, we focused on audio-visual processing in synesthesia using a semantic classification task in combination with visually or audio-visually presented animated and inanimate objects in an audio-visually congruent or incongruent manner. Fourteen subjects with auditory-visual and/or grapheme-color synesthesia and 14 control subjects participated in the experiment. During presentation of the stimuli, event-related potentials were recorded from 32 electrodes. The analysis of reaction times and error rates revealed no group differences, with best performance for audio-visually congruent stimulation, indicating the well-known multimodal facilitation effect. We found enhanced amplitude of the N1 component over occipital electrode sites for synesthetes compared to controls. The differences occurred irrespective of the experimental condition and therefore suggest a global influence on early sensory processing in synesthetes.
Abstract:
Between 8 and 40% of Parkinson disease (PD) patients will have visual hallucinations (VHs) during the course of their illness. Although cognitive impairment has been identified as a risk factor for hallucinations, more specific neuropsychological deficits underlying such phenomena have not been established. Research in psychopathology has converged to suggest that hallucinations are associated with confusion between internal representations of events and real events (i.e. impaired source monitoring). We evaluated three groups: 17 Parkinson's patients with visual hallucinations, 20 Parkinson's patients without hallucinations and 20 age-matched controls, using tests of visual imagery, visual perception and memory, including tests of source monitoring and recollective experience. The study revealed that Parkinson's patients with hallucinations appear to have intact visual imagery processes and spatial perception. However, there were impairments in object perception and recognition memory, and poor recollection of the encoding episode in comparison to both non-hallucinating Parkinson's patients and healthy controls. Errors were especially likely to occur when encoding and retrieval cues were in different modalities. The findings raise the possibility that visual hallucinations in Parkinson's patients could stem from a combination of faulty perceptual processing of environmental stimuli, and less detailed recollection of experience combined with intact image generation. (C) 2002 Elsevier Science Ltd. All rights reserved.
Abstract:
This paper presents a previously unpublished Attic lekythos and discusses visual ambiguity as an intentional drawing style used by a vase painter who conceptualised the many possible relationships between pot and user, object and subject. The Gela Painter endowed this hastily manufactured and decorated lekythos with visual effects that drew the viewer into an inherently ambivalent motif: a mounting Dionysos. This motif, like other Dionysian themes, had a vogue in late Archaic times but did not necessarily invoke chthonic associations. It had the potential to be consumed in diverse contexts, including religious festivals, by a wide range of audiences. Such images were not given to the viewer fully through visual perception but through interpretation.
Abstract:
Color perception has been a traditional test-case of the idea that the language we speak affects our perception of the world [1]. It is now established that categorical perception of color is verbally mediated and varies with culture and language [2]. However, it is unknown whether the well-demonstrated language effects on color discrimination really reach down to the level of visual perception, or whether they only reflect post-perceptual cognitive processes. Using brain potentials in a color oddball detection task with Greek and English speakers, we demonstrate that language effects may exist at a level that is literally perceptual, suggesting that speakers of different languages have differently structured minds.
Abstract:
Classical computer vision methods can only weakly emulate the multi-level parallelism in signal processing and information sharing that takes place in different parts of the primate visual system and enables it to accomplish many diverse functions of visual perception. One of the main functions of primate vision is to detect and recognise objects in natural scenes despite all the linear and non-linear variations of the objects and their environment. The superior performance of the primate visual system compared with what machine vision systems have achieved to date motivates scientists and researchers to explore this area further in pursuit of more efficient vision systems inspired by natural models. In this paper, building blocks for a hierarchical, efficient object recognition model are proposed. Incorporating attention-based processing would lead to a system that processes visual data in a non-linear way, focusing only on regions of interest and hence reducing processing time towards real-time performance. Further, it is suggested that the visual cortex model for recognising objects be modified by adding non-linearities in the ventral path, consistent with earlier discoveries reported by researchers in the neurophysiology of vision.
Abstract:
Retinal blurring resulting from the human eye's depth of focus has been shown to assist visual perception. Infinite focal depth within stereoscopically displayed virtual environments may cause undesirable effects; for instance, objects positioned at a distance in front of or behind the observer's fixation point will be perceived in sharp focus with large disparities, thereby causing diplopia. Although published research on the incorporation of synthetically generated Depth of Field (DoF) suggests that it might enhance perceived image quality, no quantitative evidence of perceptual performance gains exists. This may be due to the difficulty of dynamically generating synthetic DoF in which focal distance is actively linked to fixation distance. In this paper, such a system is described. A desktop stereographic display is used to project a virtual scene in which synthetically generated DoF is actively controlled from vergence-derived distance. A performance evaluation experiment on this system, in which subjects carried out observations in a spatially complex virtual environment, was undertaken. The virtual environment consisted of components interconnected by pipes on a distractive background. The subject was tasked with making an observation based on the connectivity of the components. The effects of focal depth variation in static and actively controlled focal distance conditions were investigated. The results and analysis show that performance gains may be achieved by the addition of synthetic DoF. The merits of the application of synthetic DoF are discussed.
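The vergence-derived distance that drives the DoF system above follows from simple binocular geometry. A minimal sketch, assuming symmetric fixation on the midline and a nominal interpupillary distance (the IPD value is an assumption, not taken from the paper):

```python
# Hedged sketch: fixation distance from the vergence angle between the
# two eyes' lines of sight. Assumes midline fixation; IPD is illustrative.
import math

def fixation_distance(vergence_deg, ipd_m=0.063):
    """Distance (m) to the fixation point for a given vergence angle."""
    half_angle = math.radians(vergence_deg) / 2.0
    return (ipd_m / 2.0) / math.tan(half_angle)

d_near = fixation_distance(7.2)  # larger vergence: nearer fixation
d_far = fixation_distance(3.6)   # smaller vergence: farther fixation
```

A system of the kind described would feed a distance estimate like this into the focal-distance parameter of the synthetic DoF rendering.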
Abstract:
Perceptual multimedia quality is of paramount importance to the continued take-up and proliferation of multimedia applications: users will not use and pay for applications if they are perceived to be of low quality. Whilst traditionally distributed multimedia quality has been characterised by Quality of Service (QoS) parameters, these neglect the user perspective of the issue of quality. In order to redress this shortcoming, we characterise the user multimedia perspective using the Quality of Perception (QoP) metric, which encompasses not only a user’s satisfaction with the quality of a multimedia presentation, but also his/her ability to analyse, synthesise and assimilate informational content of multimedia. In recognition of the fact that monitoring eye movements offers insights into visual perception, as well as the associated attention mechanisms and cognitive processes, this paper reports on the results of a study investigating the impact of differing multimedia presentation frame rates on user QoP and eye path data. Our results show that provision of higher frame rates, usually assumed to provide better multimedia presentation quality, does not significantly impact upon the median coordinate value of eye path data. Moreover, higher frame rates do not significantly increase the level of participant information assimilation, although they do significantly improve overall user enjoyment and quality perception of the multimedia content being shown.
Abstract:
A strong body of work has explored the interaction between visual perception and language comprehension; for example, recent studies exploring predictions from embodied cognition have focused particularly on the common representation of sensory-motor and semantic information. Motivated by this background, we provide a set of norms for the axis and direction of motion implied in 299 English verbs, collected from approximately 100 native speakers of British English. Until now, there have been no freely available norms of this kind for a large set of verbs that can be used in any area of language research investigating the semantic representation of motion. We have used these norms to investigate the interaction between language comprehension and low-level visual processes involved in motion perception, validating the norming procedure’s ability to capture the motion content of individual verbs. Supplemental materials for this study may be downloaded from brm.psychonomic-journals.org/content/supplemental.
Abstract:
Observers generally fail to recover three-dimensional shape accurately from binocular disparity. Typically, depth is overestimated at near distances and underestimated at far distances [Johnston, E. B. (1991). Systematic distortions of shape from stereopsis. Vision Research, 31, 1351–1360]. A simple prediction from this is that disparity-defined objects should appear to expand in depth when moving towards the observer, and compress in depth when moving away. However, additional information is provided when an object moves from which 3D Euclidean shape can be recovered, be this through the addition of structure from motion information [Richards, W. (1985). Structure from stereo and motion. Journal of the Optical Society of America A, 2, 343–349], or the use of non-generic strategies [Todd, J. T., & Norman, J. F. (2003). The visual perception of 3-D shape from multiple cues: Are observers capable of perceiving metric structure? Perception and Psychophysics, 65, 31–47]. Here, we investigated shape constancy for objects moving in depth. We found that to be perceived as constant in shape, objects needed to contract in depth when moving toward the observer, and expand in depth when moving away, countering the effects of incorrect distance scaling (Johnston, 1991). This is a striking example of the failure of shape constancy, but one that is predicted if observers neither accurately estimate object distance in order to recover Euclidean shape, nor are able to base their responses on a simpler processing strategy.
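The distance-scaling account above can be illustrated with the standard small-angle approximation for stereoscopic depth, in which the depth interval recovered from a fixed retinal disparity grows with the square of the assumed viewing distance. A minimal sketch with illustrative values (disparity, distances, and IPD are assumptions, not the study's parameters):

```python
# Hedged sketch: depth from disparity under the small-angle approximation
# delta_z ~= disparity * D^2 / IPD. If the visual system scales disparity
# with a misestimated viewing distance, perceived depth is distorted.
def depth_from_disparity(disparity_rad, distance_m, ipd_m=0.063):
    """Depth interval (m) signalled by a given disparity at distance D."""
    return disparity_rad * distance_m ** 2 / ipd_m

# object actually at 0.5 m, but scaled with a default distance of 0.75 m:
true_depth_near = depth_from_disparity(0.001, 0.5)
perceived_near = depth_from_disparity(0.001, 0.75)   # overestimated (near)
# object actually at 2.0 m, scaled with the same compressed estimate:
true_depth_far = depth_from_disparity(0.001, 2.0)
perceived_far = depth_from_disparity(0.001, 1.5)     # underestimated (far)
```

On this account, an approaching object's depth estimate shrinks as its disparity is rescaled for nearer assumed distances, which is why physical contraction in depth is needed for perceived shape constancy.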
Abstract:
This paper describes experiments relating to the perception of the roughness of simulated surfaces via the haptic and visual senses. Subjects used a magnitude estimation technique to judge the roughness of “virtual gratings” presented via a PHANToM haptic interface device, and a standard visual display unit. It was shown that under haptic perception, subjects tended to perceive roughness as decreasing with increased grating period, though this relationship was not always statistically significant. Under visual exploration, the exact relationship between spatial period and perceived roughness was less well defined, though linear regressions provided a reliable approximation to individual subjects’ estimates.
Abstract:
During locomotion, retinal flow, gaze angle, and vestibular information can contribute to one's perception of self-motion. Their respective roles were investigated during active steering: Retinal flow and gaze angle were biased by altering the visual information during computer-simulated locomotion, and vestibular information was controlled through use of a motorized chair that rotated the participant around his or her vertical axis. Chair rotation was made appropriate for the steering response of the participant or made inappropriate by rotating a proportion of the veridical amount. Large steering errors resulted from selective manipulation of retinal flow and gaze angle, and the pattern of errors provided strong evidence for an additive model of combination. Vestibular information had little or no effect on steering performance, suggesting that vestibular signals are not integrated with visual information for the control of steering at these speeds.
Abstract:
The contribution of retinal flow (RF), extraretinal (ER), and egocentric visual direction (VD) information in locomotor control was explored. First, the recovery of heading from RF was examined when ER information was manipulated; results confirmed that ER signals affect heading judgments. Then the task was translated to steering curved paths, and the availability and veracity of VD were manipulated with either degraded or systematically biased RF. Large steering errors resulted from selective manipulation of RF and VD, providing strong evidence for the combination of RF, ER, and VD. The relative weighting applied to RF and VD was estimated. A point-attractor model is proposed that combines redundant sources of information for robust locomotor control with flexible trajectory planning through active gaze.
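The weighted combination of cues estimated in this abstract (and the additive model in the preceding one) can be sketched as a simple weighted sum. The weights and biases below are illustrative assumptions, not the paper's fitted values:

```python
# Hedged sketch: steering error modelled as a weighted additive
# combination of retinal-flow (RF) and visual-direction (VD) biases.
# Weights are free parameters estimated from steering errors; the
# values here are illustrative only.
def combined_steering_error(rf_bias_deg, vd_bias_deg, w_rf=0.6, w_vd=0.4):
    """Predicted steering error when each cue is independently biased."""
    return w_rf * rf_bias_deg + w_vd * vd_bias_deg

e_rf_only = combined_steering_error(10.0, 0.0)   # bias RF alone
e_both = combined_steering_error(10.0, 10.0)     # bias both cues equally
```

Fitting such weights to the error patterns from selective cue manipulation is one way to recover the relative weighting of RF and VD that the study reports estimating.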