95 results for auditory masking
Abstract:
Posterior cortical atrophy (PCA) is a type of dementia characterized by visuo-spatial and memory deficits, dyslexia and dysgraphia, relatively early onset, and preserved insight. Language deficits have been reported in some cases of PCA. Using an off-line grammaticality judgement task, processing of wh-questions is investigated in a case of PCA. Other aspects of auditory language are also reported. It is shown that processing of wh-questions is influenced by syntactic structure, a novel finding in this condition. The results are discussed with reference to accounts of wh-questions in aphasia. An uneven profile of other language abilities is reported, with deficits in digit span (forward and backward), story retelling, and comparative questions, but intact following of commands, repetition, concept definition, generative naming, and discourse comprehension.
Abstract:
Parkinson's disease patients may have difficulty decoding prosodic emotion cues. Previous data suggest that the basal ganglia are involved, but the deficit may instead reflect dorsolateral prefrontal cortex dysfunction. An auditory emotional n-back task and a cognitive n-back task were administered to 33 patients and 33 older adult controls, as were an auditory emotional Stroop task and a cognitive Stroop task. No deficit was observed on the emotion decoding tasks, and this did not change with increased frontal lobe load. On the cognitive tasks, however, patients performed worse than older adult controls, suggesting that cognitive deficits may be more prominent. The impact of frontal lobe dysfunction on prosodic emotion cue decoding may only become apparent once frontal lobe pathology rises above a threshold.
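The n-back manipulation described above can be illustrated with a minimal sketch. The stimulus labels and function name below are illustrative assumptions, not the authors' materials:

```python
# Minimal sketch of the n-back target rule: a trial is a "hit" when the
# current stimulus matches the one presented n trials earlier.
def nback_hits(stimuli, n):
    """Return indices of trials matching the stimulus n trials back."""
    return [i for i in range(n, len(stimuli))
            if stimuli[i] == stimuli[i - n]]

# Hypothetical emotional-prosody labels (not the study's stimuli):
stim = ["happy", "sad", "happy", "angry", "angry"]
print(nback_hits(stim, 2))  # → [2]
```

Raising n increases the working-memory ("frontal lobe") load without changing the stimuli themselves, which is the manipulation the abstract refers to.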
Abstract:
In immediate recall tasks, visual recency is substantially enhanced when output interference is low (Cowan, Saults, Elliott, & Moreno, 2002; Craik, 1969), whereas auditory recency remains high even under conditions of high output interference. This auditory advantage has been interpreted in terms of auditory resistance to output interference (e.g., Neath & Surprenant, 2003). In this study the auditory-visual difference at low output interference re-emerged when ceiling effects were accounted for, but only with spoken output. With written responding the auditory advantage remained significantly larger with high than with low output interference. These new data suggest that both superior auditory encoding and modality-specific output interference contribute to the classic auditory-visual modality effect.
Abstract:
Three experiments attempted to clarify the effect of altering the spatial presentation of irrelevant auditory information. Previous research using serial recall tasks demonstrated a left-ear disadvantage for the presentation of irrelevant sounds (Hadlington, Bridges, & Darby, 2004). Experiments 1 and 2 examined the effects of manipulating the location of irrelevant sound on either a mental arithmetic task (Banbury & Berry, 1998) or a missing-item task (Jones & Macken, 1993; Experiment 4). Experiment 3 altered the amount of change in the irrelevant stream to assess how this affected the level of interference elicited. Two prerequisites appear necessary to produce the left-ear disadvantage: the presence of ordered structural changes in the irrelevant sound and the requirement for serial order processing of the attended information. The existence of a left-ear disadvantage highlights the role of the right hemisphere in the obligatory processing of auditory information. (c) 2006 Published by Elsevier Inc.
Abstract:
High-span individuals (as measured by the operation span [OSPAN] technique) are less likely than low-span individuals to notice their own names in an unattended auditory stream (A. R. A. Conway, N. Cowan, & M. F. Bunting, 2001). The possibility that OSPAN accounts for individual differences in auditory distraction on an immediate recall test was examined. There was no evidence that high-OSPAN participants were more resistant to the disruption caused by irrelevant speech in serial or in free recall. Low-OSPAN participants did, however, make more semantically related intrusion errors from the irrelevant sound stream in a free recall test (Experiment 4). Results suggest that OSPAN mediates semantic components of auditory distraction dissociable from other aspects of the irrelevant sound effect.
Abstract:
Efficient navigation is essential for user acceptance of virtual environments (VEs), but it is also an inherently difficult task to perform. Research in the area provides users with a great variety of navigation assistance in VEs; however, this assistance is still known to be inadequate, complex, and subject to many limitations. In this paper we discuss the task of navigation in virtual environments and survey the wayfinding assistance currently available for VEs. The paper introduces a taxonomy of navigation and categorizes the aids on the basis of the functions they perform. It also reviews current work in the area of non-speech auditory aids. We conclude by identifying the important areas that require further investigation and research.
Abstract:
The experiment asks whether constancy in hearing precedes or follows grouping. Listeners heard speech-like sounds comprising 8 auditory-filter shaped noise-bands that had temporal envelopes corresponding to those arising in these filters when a speech message is played. The ‘context’ words in the message were “next you’ll get _to click on”, into which a “sir” or “stir” test word was inserted. These test words were from an 11-step continuum that was formed by amplitude modulation. Listeners identified the test words appropriately and quite consistently, even though they had the ‘robotic’ quality typical of this type of 8-band speech. The speech-like effects of these sounds appear to be a consequence of auditory grouping. Constancy was assessed by comparing the influence of room reflections on the test word across conditions where the context had either the same level of reflections, or where it had a much lower level. Constancy effects were obtained with these 8-band sounds, but only in ‘matched’ conditions, where the room reflections were in the same bands in both the context and the test word. This was not the case in a comparison ‘mismatched’ condition, where no constancy effects were found. It would appear that this type of constancy in hearing precedes the across-channel grouping whose effects are so apparent in these sounds. This result is discussed in terms of the ubiquity of grouping across different levels of representation.
Abstract:
Perceptual constancy effects are observed when differing amounts of reverberation are applied to a context sentence and a test‐word embedded in it. Adding reverberation to members of a “sir”‐“stir” test‐word continuum causes temporal‐envelope distortion, which has the effect of eliciting more sir responses from listeners. If the same amount of reverberation is also applied to the context sentence, the number of sir responses decreases again, indicating an “extrinsic” compensation for the effects of reverberation. Such a mechanism would effect perceptual constancy of phonetic perception when temporal envelopes vary in reverberation. This experiment asks whether such effects precede or follow grouping. Eight auditory‐filter shaped noise‐bands were modulated with the temporal envelopes that arise when speech is played through these filters. The resulting “gestalt” percept is the appropriate speech rather than the sound of noise‐bands, presumably due to across‐channel “grouping.” These sounds were played to listeners in “matched” conditions, where reverberation was present in the same bands in both context and test‐word, and in “mismatched” conditions, where the bands in which reverberation was added differed between context and test‐word. Constancy effects were obtained in matched conditions, but not in mismatched conditions, indicating that this type of constancy in hearing precedes across‐channel grouping.
Abstract:
In a “busy” auditory environment listeners can selectively attend to one of several simultaneous messages by tracking one listener's voice characteristics. Here we ask how well other cues compete for attention with such characteristics, using variations in the spatial position of sound sources in a (virtual) seminar room. Listeners decided which of two simultaneous target words belonged in an attended “context” phrase when it was played with a simultaneous “distracter” context that had a different wording. Talker difference was in competition with a position difference, so that the target‐word chosen indicates which cue‐type the listener was tracking. The main findings are that room‐acoustic factors provide some tracking cues, whose salience increases with distance separation. This increase is more prominent in diotic conditions, indicating that these cues are largely monaural. The room‐acoustic factors might therefore be the spectral‐ and temporal‐envelope effects of reverberation on the timbre of speech. By contrast, the salience of cues associated with differences in sounds' bearings tends to decrease with distance, and these cues are more effective in dichotic conditions. In other conditions, where a distance and a bearing difference cooperate, they can completely override a talker difference at various distances.
Abstract:
This paper addresses the nature and cause of Specific Language Impairment (SLI) by reviewing recent research on sentence processing in children with SLI compared to typically developing (TD) children, and research in infant speech perception. These studies have revealed that children with SLI are sensitive to syntactic, semantic, and real-world information, but do not show sensitivity to grammatical morphemes with low phonetic saliency, and they show longer reaction times than age-matched controls. TD children from the age of 4 show trace reactivation, but some children with SLI fail to show this effect, resembling the pattern of adults and TD children with low working memory. Finally, findings from the German Language Development (GLAD) Project have revealed that a group of children at risk for SLI had a history of auditory delay and impaired processing of prosodic information in the first months of life, which is not detectable later in life. Although this is a single project that needs to be replicated with a larger group of children, it provides preliminary support for accounts of SLI that posit an explicit link between an early deficit in phonological processing and later language deficits, and for the Computational Complexity Hypothesis, which argues that the language deficit in children with SLI lies in difficulties integrating different types of information at the interfaces.
Abstract:
A speech message played several metres from the listener in a room is usually heard to have much the same phonetic content as it does when played nearby, even though the different amounts of reflected sound make the temporal envelopes of these signals very different. To study this ‘constancy’ effect, listeners heard speech messages and speech-like sounds comprising 8 auditory-filter shaped noise-bands that had temporal envelopes corresponding to those in these filters when the speech message is played. The ‘contexts’ were “next you’ll get _to click on”, into which a “sir” or “stir” test word was inserted. These test words were from an 11-step continuum, formed by amplitude modulation. Listeners identified the test words appropriately, even in the 8-band conditions where the speech had a ‘robotic’ quality. Constancy was assessed by comparing the influence of room reflections on the test word across conditions where the context had either the same level of room reflections (i.e. from the same, far distance), or where it had a much lower level (i.e. from nearby). Constancy effects were obtained with both the natural- and the 8-band speech. Results are considered in terms of the degree of ‘matching’ between the context’s and test-word’s bands.
Abstract:
Non-word repetition (NWR) was investigated in adolescents with typical development, Specific Language Impairment (SLI), and Autism Plus language Impairment (ALI) (n = 17, 13, 16, and mean ages 14;4, 15;4, 14;8, respectively). The study evaluated the hypothesis that poor NWR performance in both groups indicates an overlapping language phenotype (Kjelgaard & Tager-Flusberg, 2001). Performance was investigated both quantitatively, e.g. overall error rates, and qualitatively, e.g. the effect of length on repetition, the proportion of errors affecting phonological structure, and the proportion of consonant substitutions involving manner changes. Findings were consistent with previous research (Whitehouse, Barry, & Bishop, 2008) demonstrating a greater effect of length in the SLI group than the ALI group, which may be due to greater short-term memory limitations. In addition, an automated count of phoneme errors identified poorer performance in the SLI group than the ALI group. These findings indicate differences in the language profiles of individuals with SLI and ALI, but do not rule out a partial overlap. Errors affecting phonological structure were relatively frequent, accounting for around 40% of phonemic errors, but less frequent than straight consonant-for-consonant or vowel-for-vowel substitutions. It is proposed that these two different types of errors may reflect separate contributory mechanisms. Around 50% of consonant substitutions in the clinical groups involved manner changes, suggesting poor auditory-perceptual encoding. From a clinical perspective, algorithms which automatically count phoneme errors may enhance the sensitivity of NWR as a diagnostic marker of language impairment.
Learning outcomes: Readers will be able to (1) describe and evaluate the hypothesis that there is a phenotypic overlap between SLI and Autism Spectrum Disorders (2) describe differences in the NWR performance of adolescents with SLI and ALI, and discuss whether these differences support or refute the phenotypic overlap hypothesis, and (3) understand how computational algorithms such as the Levenshtein Distance may be used to analyse NWR data.
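The Levenshtein Distance mentioned in learning outcome (3) applies directly to phoneme sequences. This minimal sketch (the phoneme lists are illustrative, not the study's materials) counts the insertions, deletions, and substitutions separating a target non-word from a repetition:

```python
# Levenshtein distance over phoneme sequences, for automated
# counting of phoneme errors in non-word repetition (NWR) data.
def levenshtein(target, response):
    """Minimum number of phoneme insertions, deletions, and
    substitutions needed to turn `response` into `target`."""
    m, n = len(target), len(response)
    # dp[i][j] = distance between target[:i] and response[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if target[i - 1] == response[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[m][n]

# Hypothetical target non-word and a child's repetition, as phoneme lists:
target = ["b", "l", "ɔɪ", "t", "ə", "f"]
response = ["b", "w", "ɔɪ", "t", "ə"]
print(levenshtein(target, response))  # → 2 (one substitution, one deletion)
```

Summing this distance over items gives the kind of automated phoneme-error count the abstract proposes as a diagnostic marker.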
Abstract:
The recency effect found in free recall can be accounted for almost entirely in terms of the recall of ordered sequences of items. It is such sequences, presented at the end of the stimulus list but recalled at the very beginning of the response protocol, which produce a recency effect. Such sequences are recalled at the beginning of the response protocol equally often following auditory and visual presentation. These same stimulus sequences are also frequently recalled other than initially in the response protocol following auditory presentation. However, such responses are rarely found following visual presentation. The modality effect in free recall, the advantage of auditory over visual presentation, can be substantially accounted for in these terms. Theoretical and procedural implications of these data are discussed.
Abstract:
This investigation moves beyond the traditional studies of word reading to identify how the production complexity of words affects reading accuracy in an individual with deep dyslexia (JO). We examined JO’s ability to read words aloud while manipulating both the production complexity of the words and the semantic context. The classification of words as either phonetically simple or complex was based on the Index of Phonetic Complexity. The semantic context was varied using a semantic blocking paradigm (i.e., semantically blocked and unblocked conditions). In the semantically blocked condition words were grouped by semantic categories (e.g., table, sit, seat, couch), whereas in the unblocked condition the same words were presented in a random order. JO’s performance on reading aloud was also compared to her performance on a repetition task using the same items. Results revealed a strong interaction between word complexity and semantic blocking for reading aloud but not for repetition. JO produced the greatest number of errors for phonetically complex words in the semantically blocked condition. This interaction suggests that semantic processes are constrained by output production processes, which are exaggerated when derived from visual rather than auditory targets. This complex relationship between orthographic, semantic, and phonetic processes highlights the need for word recognition models to explicitly account for production processes.