10 results for Visual experience
in CentAUR: Central Archive University of Reading - UK
Abstract:
Purpose: This study investigated whether vergence and accommodation development in pre-term infants is pre-programmed or is driven by experience. Methods: 32 healthy infants, born at a mean of 34 weeks gestation (range 31.2-36 weeks), were compared with 45 healthy full-term infants (mean 40.0 weeks) over a 6-month period, starting at 4-6 weeks post-natally. Simultaneous accommodation and convergence to a detailed target were measured using a Plusoptix PowerRefII infra-red photorefractor as the target moved between 0.33m and 2m. Stimulus/response gains and responses at 0.33m and 2m were compared by both corrected (gestational) age and chronological (post-natal) age. Results: When compared by their corrected age, pre-term and full-term infants showed few significant differences in vergence and accommodation responses after 6-7 weeks of age. However, when compared by chronological age, pre-term infants’ responses were more variable, with significantly reduced vergence gain, reduced vergence response at 0.33m, reduced accommodation gain, and increased accommodation at 2m compared to full-term infants between 8 and 13 weeks after birth. Conclusions: When matched by corrected age, vergence and accommodation in pre-term infants show few differences from full-term infants’ responses. Maturation appears pre-programmed and is not advanced by visual experience. Longer periods of immature visual responses might leave pre-term infants more at risk of developing oculomotor deficits such as strabismus.
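A minimal sketch (my own illustration, not taken from the study) of how a stimulus/response gain of the kind reported here could be computed: demand is taken as 1/viewing-distance and the gain as the slope of a least-squares fit of response against demand. The distances and response values below are hypothetical.

```python
import numpy as np

# Hypothetical accommodation responses (dioptres) at each viewing distance;
# illustrative values only, not data from the study.
distances_m = np.array([0.33, 0.50, 1.00, 2.00])   # target distances (metres)
responses_D = np.array([2.60, 1.70, 0.90, 0.60])   # measured accommodation (dioptres)

stimulus_D = 1.0 / distances_m                      # demand in dioptres = 1 / distance
# Stimulus/response gain = slope of the response-vs-demand regression;
# a gain of 1.0 would mean the response exactly matches the demand.
gain, intercept = np.polyfit(stimulus_D, responses_D, 1)
print(f"accommodation gain ≈ {gain:.2f}")
```

Vergence gain can be treated analogously, with demand expressed in metre angles (also 1/distance in metres).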
Abstract:
This paper describes the development of an interface to a hospital portal system for information, communication and entertainment such that it can be used easily and effectively by all patients regardless of their age, disability, computer experience or native language. Specifically, this paper reports on the work conducted to ensure that the interface design took into account the needs of visually impaired users.
Abstract:
This paper uses the social model of disability to examine visually impaired children's experiences of their housing and neighbourhoods, and finds that the children did not experience any significant problems with the design of these environments. Rather, their problems arose within them, from factors such as the intensity of movement, for example flows of traffic. We conclude by discussing the social policy implications of these findings.
Abstract:
Rats with fornix transection, or with cytotoxic retrohippocampal lesions that removed entorhinal cortex plus ventral subiculum, performed a task that permits incidental learning about either allocentric (Allo) or egocentric (Ego) spatial cues without the need to navigate by them. Rats learned eight visual discriminations among computer-displayed scenes in a Y-maze, using the constant-negative paradigm. Every discrimination problem included two familiar scenes (constants) and many less familiar scenes (variables). On each trial, the rats chose between a constant and a variable scene, with the choice of the variable rewarded. In six problems, the two constant scenes had correlated spatial properties, either Allo (each constant always appeared in the same maze arm), Ego (each constant always appeared in a fixed direction from the start arm), or both (Allo + Ego). In two No-Cue (NC) problems, the two constants appeared in randomly determined arms and directions. Intact rats learn problems with an added Allo or Ego cue faster than NC problems; this facilitation provides indirect evidence that they learn the associations between scenes and spatial cues, even though that is not required to solve the problem. Fornix and retrohippocampal-lesioned groups learned NC problems at a similar rate to sham-operated controls and showed as much facilitation of learning by added spatial cues as the controls did; therefore, both lesion groups must have encoded the spatial cues and incidentally learned their associations with particular constant scenes. Similar facilitation was seen in subgroups that had short or long prior experience with the apparatus and task. Therefore, neither major hippocampal input-output system is crucial for learning about allocentric or egocentric cues in this paradigm, which does not require rats to control their choices or navigation directly by spatial cues.
Abstract:
Between 8 and 40% of Parkinson's disease (PD) patients will have visual hallucinations (VHs) during the course of their illness. Although cognitive impairment has been identified as a risk factor for hallucinations, more specific neuropsychological deficits underlying such phenomena have not been established. Research in psychopathology has converged to suggest that hallucinations are associated with confusion between internal representations of events and real events (i.e. impaired source monitoring). We evaluated three groups: 17 Parkinson's patients with visual hallucinations, 20 Parkinson's patients without hallucinations and 20 age-matched controls, using tests of visual imagery, visual perception and memory, including tests of source monitoring and recollective experience. The study revealed that Parkinson's patients with hallucinations appear to have intact visual imagery processes and spatial perception. However, there were impairments in object perception and recognition memory, and poorer recollection of the encoding episode, in comparison with both non-hallucinating Parkinson's patients and healthy controls. Errors were especially likely to occur when encoding and retrieval cues were in different modalities. The findings raise the possibility that visual hallucinations in Parkinson's patients could stem from a combination of faulty perceptual processing of environmental stimuli and less detailed recollection of experience, together with intact image generation. (C) 2002 Elsevier Science Ltd. All rights reserved.
Abstract:
The encoding of goal-oriented motion events varies across languages. Speakers of languages without grammatical aspect (e.g., Swedish) tend to mention motion endpoints when describing events, e.g., “two nuns walk to a house”, and attach importance to event endpoints when matching scenes from memory. Speakers of aspect languages (e.g., English), on the other hand, are more prone to direct attention to the ongoingness of motion events, which is reflected both in their event descriptions, e.g., “two nuns are walking”, and in their non-verbal similarity judgements. This study examines to what extent native speakers of Swedish (n = 82) with English as a foreign language (FL) restructure their categorisation of goal-oriented motion as a function of their English proficiency and experience with the English language (e.g., exposure, learning). Seventeen monolingual native English speakers from the United Kingdom (UK) were included for comparison purposes. Data on motion event cognition were collected through a memory-based triads matching task, in which a target scene with an intermediate degree of endpoint orientation was matched with two alternative scenes with low and high degrees of endpoint orientation, respectively. Results showed that the preference among the Swedish speakers of L2 English to base their similarity judgements on ongoingness rather than event endpoints was correlated with their use of English in everyday life, such that those who often watched television in English approximated the ongoingness preference of the native English speakers. These findings suggest that event cognition patterns may be restructured through exposure to FL audio-visual media. The results thus add to the emerging picture that learning a new language entails learning new ways of observing and reasoning about reality.
Abstract:
When the sensory consequences of an action are systematically altered, our brain can recalibrate the mappings between sensory cues and properties of our environment. This recalibration can be driven by both cue conflicts and altered sensory statistics, but neither mechanism offers a way for cues to be calibrated so that they provide accurate information about the world, as sensory cues carry no information as to their own accuracy. Here, we explored whether sensory predictions based on internal physical models could be used to accurately calibrate visual cues to 3D surface slant. Human observers played a 3D kinematic game in which they adjusted the slant of a surface so that a moving ball would bounce off the surface and through a target hoop. In one group, the ball’s bounce was manipulated so that the surface behaved as if it had a different slant from that signaled by visual cues. With experience of this altered bounce, observers recalibrated their perception of slant so that it was more consistent with the assumed laws of kinematics and the physical behavior of the surface. In another group, making the ball spin in a way that could physically explain its altered bounce eliminated this pattern of recalibration. Importantly, both groups adjusted their behavior in the kinematic game in the same way, experienced the same set of slants, and were not presented with low-level cue conflicts that could drive the recalibration. We conclude that observers use predictive kinematic models to accurately calibrate visual cues to 3D properties of the world.
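As a rough illustration of the kind of kinematic prediction involved (my own sketch, not the authors' implementation), the bounce of a ball off a slanted surface can be modelled by reflecting its velocity about the surface normal; if the rebound is computed with a slant that differs from the visually signalled one, predicted and observed trajectories diverge, which is the sort of discrepancy that could drive recalibration of perceived slant.

```python
import numpy as np

def bounce(velocity, slant_deg):
    """Idealised, fully elastic bounce of a 2D velocity off a tilted surface.

    Uses the reflection formula v' = v - 2 (v . n) n, where n is the unit
    normal of a surface rotated by slant_deg from horizontal. Illustrative only.
    """
    slant = np.radians(slant_deg)
    normal = np.array([-np.sin(slant), np.cos(slant)])  # unit surface normal
    v = np.asarray(velocity, dtype=float)
    return v - 2.0 * np.dot(v, normal) * normal

incoming = np.array([1.0, -1.0])   # ball moving rightwards and downwards
print(bounce(incoming, 0.0))       # rebound off a flat surface  -> [1. 1.]
print(bounce(incoming, 10.0))      # rebound off a surface slanted by 10 degrees
# If visual cues signal a flat surface but the ball rebounds as though the
# surface were slanted by 10 degrees, the mismatch between predicted and
# observed bounces is the kind of signal that could recalibrate perceived slant.
```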
Abstract:
Synesthesia entails a special kind of sensory perception, in which stimulation in one sensory modality leads to an internally generated perceptual experience in another, non-stimulated sensory modality. This phenomenon can be viewed as an abnormal multisensory integration process, as the synesthetic percept is aberrantly fused with the stimulated modality. Indeed, recent synesthesia research has focused on multimodal processing even outside of the specific synesthesia-inducing context and has revealed altered multimodal integration, suggesting perceptual alterations at a global level. Here, we focused on audio-visual processing in synesthesia using a semantic classification task combined with visually or audio-visually presented animate and inanimate objects in audio-visually congruent and incongruent conditions. Fourteen subjects with auditory-visual and/or grapheme-color synesthesia and 14 control subjects participated in the experiment. During presentation of the stimuli, event-related potentials were recorded from 32 electrodes. The analysis of reaction times and error rates revealed no group differences, with the best performance for audio-visually congruent stimulation, indicating the well-known multimodal facilitation effect. We found an enhanced amplitude of the N1 component over occipital electrode sites in synesthetes compared to controls. The differences occurred irrespective of the experimental condition and therefore suggest a global influence on early sensory processing in synesthetes.
Abstract:
This response examines what is overlooked in Sylvester’s analysis of similarities between the US police security response to the Boston marathon bombings (2013) and Kevin Powers’ fictionalized account of US war operations in Al Tafar, Iraq (2004), and evaluates the consequences for our understanding of contemporary war. It does so by highlighting differences between the experiences of residents in Boston and in the (real) town of Tal Afar, key among them the insecurity, fear and calamity that result from the distinct political realities of these locations. The experience of war from the perspective of its victims adds an important dimension to the debate over the changing nature of war. At a time marked by an unprecedented level of technologization and visual mediation, it brings into focus the fragmentary and often one-sided evidence on which our knowledge of contemporary war is based. It reminds us to ask not only what we know about war, but how we know it.