956 results for auditory cues
Abstract:
We report two studies of the distinct effects that a word's age of acquisition (AoA) and frequency have on the mental lexicon. In the first study, a purely statistical analysis, we show that AoA and frequency are related in different ways to the phonological form and imageability of different words. In the second study, three groups of participants (34 seven-year-olds, 30 ten-year-olds, and 17 adults) took part in an auditory lexical decision task, with stimuli varying in AoA, frequency, length, neighbourhood density, and imageability. The principal result is that the influence of these different variables changes as a function of AoA: Neighbourhood density effects are apparent for early and late AoA words, but not for intermediate AoA, whereas imageability effects are apparent for intermediate AoA words but not for early or late AoA. These results are discussed from the perspective that AoA affects a word's representation, but frequency affects processing biases.
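For illustration only (not the authors' analysis), the AoA-dependent pattern described above could be probed with an item-level regression in which neighbourhood-density and imageability effects are allowed to vary across AoA bands; all column and file names below are hypothetical.

```python
# Illustrative sketch only: hypothetical item-level lexical decision data,
# not the study's dataset. Columns assumed: rt, aoa_band (early/mid/late),
# frequency, length, nhood_density, imageability.
import pandas as pd
import statsmodels.formula.api as smf

items = pd.read_csv("lexical_decision_items.csv")  # hypothetical file

# Treat AoA as a categorical band so that the neighbourhood-density and
# imageability slopes can differ for early, intermediate, and late words,
# mirroring the interaction pattern reported in the abstract.
model = smf.ols(
    "rt ~ C(aoa_band) * nhood_density + C(aoa_band) * imageability"
    " + frequency + length",
    data=items,
).fit()
print(model.summary())
```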
Abstract:
Parkinson's disease patients may have difficulty decoding prosodic emotion cues. Previous findings suggest that the basal ganglia are involved, but the difficulty may instead reflect dorsolateral prefrontal cortex dysfunction. An auditory emotional n-back task and a cognitive n-back task were administered to 33 patients and 33 older adult controls, as were an auditory emotional Stroop task and a cognitive Stroop task. No deficit was observed on the emotion decoding tasks, and this did not change with increased frontal lobe load. However, on the cognitive tasks, patients performed worse than older adult controls, suggesting that cognitive deficits may be more prominent. The impact of frontal lobe dysfunction on prosodic emotion cue decoding may become apparent only once frontal lobe pathology rises above a threshold.
Abstract:
This article explores young infants' ability to learn new words in situations providing tightly controlled social and salience cues to their reference. Four experiments investigated whether, given two potential referents, 15-month-olds would attach novel labels to (a) an image toward which a digital recording of a face turned and gazed, (b) a moving image versus a stationary image, (c) a moving image toward which the face gazed, and (d) a gazed-on image versus a moving image. Infants successfully used the recorded gaze cue to form new word-referent associations and also showed learning in the salience condition. However, their behavior in the salience condition and in the experiments that followed suggests that, rather than basing their judgments of the words' reference on the mere presence or absence of the referent's motion, infants were strongly biased to attend to the consistency with which potential referents moved when a word was heard.
Abstract:
Purpose. Accommodation can mask hyperopia and reduce the accuracy of non-cycloplegic refraction. It is, therefore, important to minimize accommodation to obtain as accurate a measure of hyperopia as possible. To characterize the parameters required to measure the maximally hyperopic error using photorefraction, we used different target types and distances to determine which target was most likely to maximally relax accommodation and thus detect hyperopia in an individual more accurately. Methods. A PlusoptiX SO4 infra-red photorefractor was mounted in a remote haploscope which presented the targets. All participants were tested with targets at four fixation distances between 0.3 and 2 m containing all combinations of blur, disparity, and proximity/looming cues. Thirty-eight infants (6 to 44 weeks) were studied longitudinally, and 104 children [4 to 15 years (mean 6.4)] and 85 adults, with a range of refractive errors and binocular vision status, were tested once. Cycloplegic refraction data were available for a sub-set of 59 participants spread across the age range. Results. The maximally hyperopic refraction (MHR) found at any time in the session was most frequently obtained when fixating the most distant targets and those containing disparity and dynamic proximity/looming cues. Presence or absence of blur was less significant, and targets in which only single cues to depth were present were also less likely to produce MHR. MHR correlated closely with cycloplegic refraction (r = 0.93, mean difference 0.07 D, p = n.s., 95% confidence interval within +/-0.25 D) after correction by a calibration factor. Conclusions. Maximum relaxation of accommodation occurred for binocular targets receding into the distance. Proximal and disparity cues aid relaxation of accommodation to a greater extent than blur, and thus non-cycloplegic refraction targets should incorporate these cues. This is especially important in screening contexts with a brief opportunity to test for significant hyperopia. MHR in our laboratory was found to be a reliable estimate of cycloplegic refraction. (Optom Vis Sci 2009;86:1276-1286)
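As a minimal sketch (not the authors' code), the agreement analysis described above amounts to a linear calibration of photorefraction readings against cycloplegic refraction, followed by a correlation and mean-difference check; the function below assumes paired arrays of dioptric values.

```python
# Illustrative sketch: generic calibration/agreement check between non-cycloplegic
# MHR and cycloplegic refraction. Inputs are hypothetical paired arrays in dioptres.
import numpy as np
from scipy import stats

def agreement_after_calibration(mhr, cyclo):
    mhr = np.asarray(mhr, dtype=float)
    cyclo = np.asarray(cyclo, dtype=float)
    # Least-squares linear calibration of MHR onto cycloplegic refraction.
    slope, intercept = np.polyfit(mhr, cyclo, 1)
    calibrated = slope * mhr + intercept
    r, p = stats.pearsonr(calibrated, cyclo)             # the study reports r = 0.93
    diff = calibrated - cyclo
    ci95 = 1.96 * diff.std(ddof=1) / np.sqrt(len(diff))  # 95% CI of the mean difference
    return r, p, diff.mean(), ci95
```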
Abstract:
Binocular disparity, blur, and proximal cues drive convergence and accommodation. Disparity is considered to be the main vergence cue and blur the main accommodation cue. We have developed a remote haploscopic photorefractor to measure vergence and accommodation simultaneously and objectively in a wide range of participants of all ages while they fixate targets at between 0.3 and 2 m. By separating the three main near cues, we can explore their relative weighting in three-, two-, one-, and zero-cue conditions. Disparity can be manipulated by remote occlusion, blur cues by using either a Gabor patch or a detailed picture target, and looming cues by either scaling or not scaling target size with distance. In normal orthophoric, emmetropic, symptom-free, naive, visually mature participants, disparity was by far the most significant cue to both vergence and accommodation. Accommodation responses dropped dramatically if disparity was not available. Blur had a clinically significant effect only when disparity was absent. Proximity had very little effect. There was considerable interparticipant variation. We predict that the relative weighting of near cues is likely to vary between clinical groups, and we present some individual cases as examples. We are using this naturalistic tool to research strabismus, vergence and accommodation development, and emmetropization.
Abstract:
As we move through the world, our eyes acquire a sequence of images. The information from this sequence is sufficient to determine the structure of a three-dimensional scene, up to a scale factor determined by the distance that the eyes have moved [1, 2]. Previous evidence shows that the human visual system accounts for the distance the observer has walked [3,4] and the separation of the eyes [5-8] when judging the scale, shape, and distance of objects. However, in an immersive virtual-reality environment, observers failed to notice when a scene expanded or contracted, despite having consistent information about scale from both distance walked and binocular vision. This failure led to large errors in judging the size of objects. The pattern of errors cannot be explained by assuming a visual reconstruction of the scene with an incorrect estimate of interocular separation or distance walked. Instead, it is consistent with a Bayesian model of cue integration in which the efficacy of motion and disparity cues is greater at near viewing distances. Our results imply that observers are more willing to adjust their estimate of interocular separation or distance walked than to accept that the scene has changed in size.
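For concreteness, here is a toy sketch of the reliability-weighted (Bayesian) cue combination invoked above: each cue estimate is weighted by the inverse of its variance, and the distance-dependent variances are assumed purely for illustration, not taken from the study.

```python
# Toy sketch of inverse-variance (Bayesian) cue combination. The distance scalings
# of cue noise are hypothetical; they only illustrate why motion and disparity
# dominate at near viewing distances and lose efficacy farther away.
import numpy as np

def combine_cues(estimates, variances):
    """Inverse-variance weighted fusion of independent Gaussian cue estimates."""
    w = 1.0 / np.asarray(variances, dtype=float)
    w /= w.sum()
    return float(np.dot(w, np.asarray(estimates, dtype=float))), w

def cue_variances(d):
    # Assumed: disparity and motion noise grow with viewing distance d (metres),
    # while a stability prior ("the scene does not change size") has fixed noise.
    return {"disparity": 0.01 * d**2, "motion": 0.02 * d**2, "prior": 0.05}

names = ["disparity", "motion", "prior"]
signals = {"disparity": 1.0, "motion": 1.0, "prior": 0.0}  # arbitrary scale-change signals

for d in (0.5, 2.0, 8.0):
    var = cue_variances(d)
    fused, w = combine_cues([signals[n] for n in names], [var[n] for n in names])
    print(f"d = {d} m  weights = {np.round(w, 2)}  fused scale signal = {fused:.2f}")
```

At near distances the disparity and motion weights dominate, so a genuine change in scene scale would register strongly; at far distances the fixed prior dominates, reproducing the tendency to discount the change.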
Abstract:
Rats with fornix transection, or with cytotoxic retrohippocampal lesions that removed entorhinal cortex plus ventral subiculum, performed a task that permits incidental learning about either allocentric (Allo) or egocentric (Ego) spatial cues without the need to navigate by them. Rats learned eight visual discriminations among computer-displayed scenes in a Y-maze, using the constant-negative paradigm. Every discrimination problem included two familiar scenes (constants) and many less familiar scenes (variables). On each trial, the rats chose between a constant and a variable scene, with the choice of the variable rewarded. In six problems, the two constant scenes had correlated spatial properties, either Allo (each constant always appeared in the same maze arm) or Ego (each constant always appeared in a fixed direction from the start arm) or both (Allo + Ego). In two No-Cue (NC) problems, the two constants appeared in randomly determined arms and directions. Intact rats learn problems with an added Allo or Ego cue faster than NC problems; this facilitation provides indirect evidence that they learn the associations between scenes and spatial cues, even though that is not required for problem solution. The fornix-transected and retrohippocampal-lesioned groups learned NC problems at a rate similar to that of sham-operated controls and showed as much facilitation of learning by added spatial cues as did the controls; therefore, both lesion groups must have encoded the spatial cues and incidentally learned their associations with particular constant scenes. Similar facilitation was seen in subgroups that had short or long prior experience with the apparatus and task. Therefore, neither major hippocampal input-output system is crucial for learning about allocentric or egocentric cues in this paradigm, which does not require rats to control their choices or navigation directly by spatial cues.
Abstract:
Two experiments examine the effect, on an immediate recall test, of simulating a reverberant auditory environment in which auditory distracters in the form of speech are played to the participants (the 'irrelevant sound effect'). An echo-intensive environment, simulated by adding reverberation to the speech, reduced the extent of 'changes in state' in the irrelevant speech stream by smoothing the profile of the waveform. In both experiments, the reverberant auditory environment produced significantly smaller irrelevant sound distraction effects than an echo-free environment. The results are interpreted in terms of the changing-state hypothesis, which states that the acoustic content of irrelevant sound, rather than its phonology or semantics, determines the extent of the irrelevant sound effect (ISE).
Abstract:
The 'irrelevant sound effect' in short-term memory is commonly believed to entail a number of direct consequences for cognitive performance in the office and other workplaces (e.g. S. P. Banbury, S. Tremblay, W. J. Macken, & D. M. Jones, 2001). It may also help to identify what types of sound are most suitable as auditory warning signals. However, the conclusions drawn are based primarily upon evidence from a single task (serial recall) and a single population (young adults). This evidence is reconsidered from the standpoint of different worker populations confronted with common workplace tasks and auditory environments. Recommendations are put forward for factors to be considered when assessing the impact of auditory distraction in the workplace.
Abstract:
Single-point interaction haptic devices do not provide the natural grasp and manipulation found in the real world and afforded by multi-fingered haptics. The present study investigates a two-fingered grasp manipulation involving rotation, performed with and without force feedback. There were three visual cue conditions: monocular, binocular, and projective lighting. Performance metrics of time and positional accuracy were assessed. The results indicate that adding haptics to an object manipulation task increases positional accuracy but slightly increases the overall time taken.
Abstract:
Background: As people age, language-processing ability changes. While several factors modify discourse comprehension ability in older adults, the syntactic complexity of auditory discourse has received scant attention, despite the widely researched domain of syntactic processing of single sentences in older adults. Aims: The aims of this study were to investigate the ability of healthy older adults to understand stories that differed in syntactic complexity, and the relation of this ability to working memory. Methods & Procedures: A total of 51 healthy adults (divided into three age groups) took part. They listened to brief stories (syntactically simple and syntactically complex) and responded to true/false comprehension probes following each story. Working memory capacity (digit span, forward and backward) was also measured. Outcomes & Results: Differences were found in the ability of healthy older adults to understand simple and complex discourse. The complex discourse was the more sensitive measure for discerning age-related language patterns: only the complex discourse task correlated moderately with age, and there was no correlation between age and simple discourse. Moderate correlations were also found between working memory and complex discourse. Education did not correlate with discourse comprehension, either simple or complex. Conclusions: Older adults may be less efficient in forming syntactically complex representations, and this may be influenced by limitations in working memory.
Abstract:
In a “busy” auditory environment listeners can selectively attend to one of several simultaneous messages by tracking one talker's voice characteristics. Here we ask how well other cues compete for attention with such characteristics, using variations in the spatial position of sound sources in a (virtual) seminar room. Listeners decided which of two simultaneous target words belonged in an attended “context” phrase when it was played with a simultaneous “distracter” context that had a different wording. Talker difference was in competition with a position difference, so that the target word chosen indicates which cue type the listener was tracking. The main findings are that room-acoustic factors provide some tracking cues, whose salience increases with distance separation. This increase is more prominent in diotic conditions, indicating that these cues are largely monaural. The room-acoustic factors might therefore be the spectral- and temporal-envelope effects of reverberation on the timbre of speech. By contrast, the salience of cues associated with differences in sounds' bearings tends to decrease with distance, and these cues are more effective in dichotic conditions. In other conditions, where a distance and a bearing difference cooperate, they can completely override a talker difference at various distances.