83 results for auditory hallucinations
Abstract:
Three experiments examined transfer across form (words/pictures) and modality (visual/auditory) in written word, auditory word, and pictorial implicit memory tests, as well as on a free recall task. Experiment 1 showed no significant transfer across form on any of the three implicit memory tests, and an asymmetric pattern of transfer across modality. In contrast, the free recall results revealed a very different picture. Experiment 2 further investigated the asymmetric modality effects obtained for the implicit memory measures by employing articulatory suppression and picture naming to control the generation of phonological codes. Finally, Experiment 3 examined the effects of overt word naming and covert picture labelling on transfer between study and test form. The results of the experiments are discussed in relation to Tulving and Schacter's (1990) Perceptual Representation Systems framework and Roediger's (1990) Transfer Appropriate Processing theory.
Abstract:
Two experiments examined the learning of a set of Greek pronunciation rules through explicit and implicit modes of rule presentation. Experiment 1 compared the effectiveness of implicit and explicit modes of presentation in two modalities, visual and auditory. Subjects in the explicit or rule group were presented with the rule set, and those in the implicit or natural group were shown a set of Greek words, composed of letters from the rule set, linked to their pronunciations. Subjects learned the Greek words to criterion and were then given a series of tests which aimed to tap different types of knowledge. The results showed an advantage of explicit study of the rules. In addition, an interaction was found between mode of presentation and modality. Explicit instruction was more effective in the visual than in the auditory modality, whereas there was no modality effect for implicit instruction. Experiment 2 examined a possible reason for the advantage of the rule groups by comparing different combinations of explicit and implicit presentation in the study and learning phases. The results suggested that explicit presentation of the rules is only beneficial when it is followed by practice at applying them.
Abstract:
Two studies examine the experience of “earworms”, unwanted catchy tunes that repeat. Survey data show that the experience is widespread but earworms are not generally considered problematic, although those who consider music to be important to them report earworms as longer, and harder to control, than those who consider music as less important. The tunes which produce these experiences vary considerably between individuals but are always familiar to those who experience them. A diary study confirms these findings and also indicates that, although earworm recurrence is relatively uncommon and unlikely to persist for longer than 24 hours, the length of both the earworm and the earworm experience frequently exceeds standard estimates of auditory memory capacity. Active attempts to block or eliminate the earworm are less successful than passive acceptance, consistent with Wegner’s (1994) theory of ironic mental control.
Abstract:
It has been previously demonstrated that extensive activation in the dorsolateral temporal lobes is associated with masking a speech target with a speech masker, consistent with the hypothesis that competition for central auditory processes is an important factor in informational masking. Here, masking from speech and two additional maskers derived from the original speech were investigated. One of these is spectrally rotated speech, which is unintelligible and has a similar (inverted) spectrotemporal profile to speech. The authors also controlled for the possibility of “glimpsing” of the target signal during modulated masking sounds by using speech-modulated noise as a masker in a baseline condition. Functional imaging results reveal that masking speech with speech leads to bilateral superior temporal gyrus (STG) activation relative to a speech-in-noise baseline, while masking speech with spectrally rotated speech leads solely to right STG activation relative to the baseline. This result is discussed in terms of hemispheric asymmetries for speech perception, and interpreted as showing that masking effects can arise through two parallel neural systems, in the left and right temporal lobes. This has implications for the competition for resources caused by speech and rotated speech maskers, and may illuminate some of the mechanisms involved in informational masking.
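Spectrally rotated speech of this kind is typically produced in the literature by ring-modulating band-limited speech about a fixed frequency, which mirrors the spectrum and renders it unintelligible while preserving its spectrotemporal complexity. The sketch below illustrates that general technique in Python (numpy/scipy); the 4 kHz rotation point, filter order, and sampling-rate assumption are illustrative, not details taken from this study.

    # Illustrative sketch only: generic spectral rotation by ring modulation.
    # Assumptions (not from the study): rotation about 4 kHz, 6th-order Butterworth
    # filters, mono float input sampled well above 8 kHz (e.g. 22.05 kHz).
    import numpy as np
    from scipy.signal import butter, filtfilt

    def spectrally_rotate(x, fs, rotation_hz=4000.0):
        """Move energy at frequency f to (rotation_hz - f), inverting the spectrum
        below rotation_hz while leaving the temporal envelope broadly intact."""
        nyq = fs / 2.0
        b, a = butter(6, 0.95 * rotation_hz / nyq, btype="lowpass")
        x_lp = filtfilt(b, a, x)                                   # band-limit below the rotation point
        t = np.arange(len(x_lp)) / fs
        modulated = x_lp * np.cos(2.0 * np.pi * rotation_hz * t)   # ring modulation splits f into (rotation_hz - f) and (rotation_hz + f)
        return 2.0 * filtfilt(b, a, modulated)                     # discard the sum-frequency images and restore level

As a sanity check on this kind of processing, rotating a signal twice with the same settings approximately recovers the original band-limited input, since f maps back to f under two applications of f -> (rotation_hz - f).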
Abstract:
Background noise should in theory hinder detection of auditory cues associated with approaching danger. We tested whether foraging chaffinches Fringilla coelebs responded to background noise by increasing vigilance, and examined whether this was explained by predation risk compensation or by a novel stimulus hypothesis. The former predicts that only interscan interval should be modified in the presence of background noise, not vigilance levels generally. This is because noise hampers auditory cue detection and increases perceived predation risk primarily when in the head-down position, and also because previous tests have shown that only interscan interval is correlated with predator detection ability in this system. Chaffinches only modified interscan interval, supporting this hypothesis. At the same time they made significantly fewer pecks when feeding during the background noise treatment, and so the increased vigilance led to a reduction in intake rate, suggesting that compensating for the increased predation risk could indirectly lead to a fitness cost. Finally, the novel stimulus hypothesis predicts that chaffinches should habituate to the noise, which did not occur within a trial or over 5 subsequent trials. We conclude that auditory cues may be an important component of the trade-off between vigilance and feeding, and discuss possible implications for anti-predation theory and ecological processes.
Abstract:
Environmental conditions during the early life stages of birds can have significant effects on the quality of sexual signals in adulthood, especially song, and these ultimately have consequences for breeding success and fitness. This has wide-ranging implications for the rehabilitation protocols undertaken in wildlife hospitals which aim to return captive-reared animals to their natural habitat. Here we review the current literature on bird song development and learning in order to determine the potential impact that the rearing of juvenile songbirds in captivity can have on rehabilitation success. We quantify the effects of reduced learning on song structure and relate this to the possible effects on an individual's ability to defend a territory or attract a mate. We show the importance of providing a conspecific auditory model for birds to learn from in the early stages post-fledging, either via live- or tape-tutoring, and provide suggestions for tutoring regimes. We also highlight the historical focus on learning in a few model species that has left an information gap in our knowledge for most species reared at wildlife hospitals.
Abstract:
Perceptual compensation for reverberation was measured by embedding test words in contexts that were either spoken phrases or processed versions of this speech. The processing gave steady-spectrum contexts with no changes in the shape of the short-term spectral envelope over time, but with fluctuations in the temporal envelope. Test words were from a continuum between "sir" and "stir." When the amount of reverberation in test words was increased, to a level above the amount in the context, they sounded more like "sir." However, when the amount of reverberation in the context was also increased, to the level present in the test word, there was perceptual compensation in some conditions so that test words sounded more like "stir" again. Experiments here found compensation with speech contexts and with some steady-spectrum contexts, indicating that fluctuations in the context's temporal envelope can be sufficient for compensation. Other results suggest that the effectiveness of speech contexts is partly due to the narrow-band "frequency-channels" of the auditory periphery, where temporal-envelope fluctuations can be more pronounced than they are in the sound's broadband temporal envelope. Further results indicate that for compensation to influence speech, the context needs to be in a broad range of frequency channels. (c) 2007 Acoustical Society of America.
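The contrast the abstract draws between the broadband temporal envelope and envelopes within narrow auditory "frequency-channels" can be made concrete by band-pass filtering a context signal into channels and taking the Hilbert envelope of each. The sketch below is a generic illustration of that analysis in Python (numpy/scipy); the channel edges and filter order are assumptions, not the authors' processing.

    # Illustrative sketch only: temporal envelopes within narrow frequency channels
    # versus the broadband envelope. Channel edges and filter order are assumptions.
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def channel_envelopes(x, fs, band_edges_hz):
        """Return one temporal envelope per (low, high) band: band-pass + Hilbert magnitude."""
        nyq = fs / 2.0
        envs = []
        for lo, hi in band_edges_hz:
            b, a = butter(4, [lo / nyq, hi / nyq], btype="bandpass")
            envs.append(np.abs(hilbert(filtfilt(b, a, x))))   # temporal envelope of this channel
        return np.array(envs)

    def broadband_envelope(x):
        return np.abs(hilbert(x))   # envelope of the unfiltered signal, for comparison

    # e.g. channel_envelopes(x, fs, [(100, 300), (300, 700), (700, 1500), (1500, 3000)])

In an analysis of this kind the narrow-channel envelopes generally show deeper fluctuations than the broadband envelope, which is the property the abstract appeals to when explaining the effectiveness of speech contexts.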
Abstract:
The present research sought to investigate the role of the basal ganglia in timing of sub- and supra-second intervals via an examination of the ability of people with Parkinson's disease (PD) to make temporal judgments in two ranges, 100-500 ms, and 1-5 s. Eighteen nondemented medicated patients with PD were compared with 14 matched controls on a duration-bisection task in which participants were required to discriminate auditory and visual signal durations within each time range. Results showed that patients with PD exhibited more variable duration judgments across both signal modality and duration range than controls, although closer analyses confirmed a timing deficit in the longer duration range only. The findings presented here suggest the bisection procedure may be a useful tool in identifying timing impairments in PD and, more generally, reaffirm the hypothesised role of the basal ganglia in temporal perception at the level of the attentionally mediated internal clock as well as memory retrieval and/or decision-making processes. (c) 2007 Elsevier Inc. All rights reserved.
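Duration-bisection performance of the kind described above is commonly summarized by fitting a psychometric function to the proportion of "long" responses across probe durations and reading off the bisection point and a variability measure. The sketch below shows one generic way to do this with scipy; the logistic form, the probe durations, and the response proportions are hypothetical illustrations, not the study's data or analysis.

    # Illustrative sketch only: fit a logistic psychometric function to hypothetical
    # duration-bisection data and derive the bisection point (PSE) and difference limen.
    import numpy as np
    from scipy.optimize import curve_fit

    def p_long(duration_ms, pse, scale):
        """Probability of classifying a probe duration as 'long'."""
        return 1.0 / (1.0 + np.exp(-(duration_ms - pse) / scale))

    durations = np.array([100.0, 200.0, 300.0, 400.0, 500.0])   # hypothetical probe durations (ms)
    responses = np.array([0.05, 0.20, 0.55, 0.85, 0.95])        # hypothetical proportion of 'long' responses

    (pse, scale), _ = curve_fit(p_long, durations, responses, p0=[300.0, 50.0])
    dl = scale * np.log(3.0)   # half the 25%-75% spread of the fitted logistic
    print(f"bisection point ~{pse:.0f} ms; difference limen ~{dl:.0f} ms; Weber ratio ~{dl / pse:.2f}")

A larger Weber ratio (difference limen relative to the bisection point) corresponds to the more variable duration judgements reported here for the patient group.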
Abstract:
Posterior cortical atrophy (PCA) is a type of dementia that is characterized by visuo-spatial and memory deficits, dyslexia and dysgraphia, relatively early onset and preserved insight. Language deficits have been reported in some cases of PCA. Using an off-line grammaticality judgement task, processing of wh-questions is investigated in a case of PCA. Other aspects of auditory language are also reported. It is shown that processing of wh-questions is influenced by syntactic structure, a novel finding in this condition. The results are discussed with reference to accounts of wh-questions in aphasia. An uneven profile of other language abilities is reported, with deficits in digit span (forward, backward), story retelling ability, and comparative questions, but intact abilities in following commands, repetition, concept definition, generative naming and discourse comprehension.
Abstract:
Parkinson's disease patients may have difficulty decoding prosodic emotion cues. Previous findings suggest that the basal ganglia are involved, but the deficit may instead reflect dorsolateral prefrontal cortex dysfunction. An auditory emotional n-back task and cognitive n-back task were administered to 33 patients and 33 older adult controls, as were an auditory emotional Stroop task and a cognitive Stroop task. No deficit was observed on the emotion decoding tasks; this did not alter with increased frontal lobe load. However, on the cognitive tasks, patients performed worse than older adult controls, suggesting that cognitive deficits may be more prominent. The impact of frontal lobe dysfunction on prosodic emotion cue decoding may only become apparent once frontal lobe pathology rises above a threshold.
Abstract:
In immediate recall tasks, visual recency is substantially enhanced when output interference is low (Cowan, Saults, Elliott, & Moreno, 2002; Craik, 1969), whereas auditory recency remains high even under conditions of high output interference. This auditory advantage has been interpreted in terms of auditory resistance to output interference (e.g., Neath & Surprenant, 2003). In this study the auditory-visual difference at low output interference re-emerged when ceiling effects were accounted for, but only with spoken output. With written responding the auditory advantage remained significantly larger with high than with low output interference. These new data suggest that both superior auditory encoding and modality-specific output interference contribute to the classic auditory-visual modality effect.
Abstract:
Three experiments attempted to clarify the effect of altering the spatial presentation of irrelevant auditory information. Previous research using serial recall tasks demonstrated a left-ear disadvantage for the presentation of irrelevant sounds (Hadlington, Bridges, & Darby, 2004). Experiments 1 and 2 examined the effects of manipulating the location of irrelevant sound on either a mental arithmetic task (Banbury & Berry, 1998) or a missing-item task (Jones & Macken, 1993; Experiment 4). Experiment 3 altered the amount of change in the irrelevant stream to assess how this affected the level of interference elicited. Two prerequisites appear necessary to produce the left-ear disadvantage: the presence of ordered structural changes in the irrelevant sound, and the requirement for serial order processing of the attended information. The existence of a left-ear disadvantage highlights the role of the right hemisphere in the obligatory processing of auditory information. (c) 2006 Published by Elsevier Inc.
Abstract:
High-span individuals (as measured by the operation span [OSPAN] technique) are less likely than low-span individuals to notice their own names in an unattended auditory stream (A. R. A. Conway, N. Cowan, & M. F. Bunting, 2001). The possibility that OSPAN accounts for individual differences in auditory distraction on an immediate recall test was examined. There was no evidence that high-OSPAN participants were more resistant to the disruption caused by irrelevant speech in serial or in free recall. Low-OSPAN participants did, however, make more semantically related intrusion errors from the irrelevant sound stream in a free recall test (Experiment 4). Results suggest that OSPAN mediates semantic components of auditory distraction dissociable from other aspects of the irrelevant sound effect.