78 results for AUDITORY
in CentAUR: Central Archive, University of Reading - UK
Abstract:
In this research, a cross-modal paradigm was chosen to test the hypothesis that affective olfactory and auditory cues paired with neutral visual stimuli bearing no resemblance or logical connection to the affective cues can evoke preference shifts in those stimuli. Neutral visual stimuli of abstract paintings were presented simultaneously with liked and disliked odours and sounds, with neutral-neutral pairings serving as controls. The results confirm previous findings that the affective evaluation of previously neutral visual stimuli shifts in the direction of contingently presented affective auditory stimuli. In addition, this research shows the presence of conditioning with affective odours having no logical connection with the pictures.
Abstract:
Four experiments investigate the hypothesis that irrelevant sound interferes with serial recall of auditory items in the same fashion as with visually presented items. In Experiment 1 an acoustically changing sequence of 30 irrelevant utterances was more disruptive than 30 repetitions of the same utterance (the changing-state effect; Jones, Madden, & Miles, 1992) whether the to-be-remembered items were visually or auditorily presented. Experiment 2 showed that two different utterances spoken once (a heterogeneous compound suffix; LeCompte & Watkins, 1995) produced less disruption to serial recall than 15 repetitions of the same sequence. Disruption thus depends on the number of sounds in the irrelevant sequence. In Experiments 3a and 3b the number of different sounds, the "token-set" size (Tremblay & Jones, 1998), in an irrelevant sequence also influenced the magnitude of disruption in both irrelevant sound and compound suffix conditions. The results support the view that the disruption of memory for auditory items, like memory for visually presented items, is dependent on the number of different irrelevant sounds presented and the size of the set from which these sounds are taken. Theoretical implications are discussed.
Abstract:
We report two studies of the distinct effects that a word's age of acquisition (AoA) and frequency have on the mental lexicon. In the first study, a purely statistical analysis, we show that AoA and frequency are related in different ways to the phonological form and imageability of different words. In the second study, three groups of participants (34 seven-year-olds, 30 ten-year-olds, and 17 adults) took part in an auditory lexical decision task, with stimuli varying in AoA, frequency, length, neighbourhood density, and imageability. The principal result is that the influence of these different variables changes as a function of AoA: Neighbourhood density effects are apparent for early and late AoA words, but not for intermediate AoA, whereas imageability effects are apparent for intermediate AoA words but not for early or late AoA. These results are discussed from the perspective that AoA affects a word's representation, but frequency affects processing biases.
Abstract:
We frequently encounter conflicting emotion cues. This study examined how the neural response to emotional prosody differed in the presence of congruent and incongruent lexico-semantic cues. Two hypotheses were assessed: (i) decoding emotional prosody with conflicting lexico-semantic cues would activate brain regions associated with cognitive conflict (anterior cingulate and dorsolateral prefrontal cortex) or (ii) the increased attentional load of incongruent cues would modulate the activity of regions that decode emotional prosody (right lateral temporal cortex). While the participants indicated the emotion conveyed by prosody, functional magnetic resonance imaging data were acquired on a 3T scanner using blood oxygenation level-dependent contrast. Using SPM5, the response to congruent cues was contrasted with that to emotional prosody alone, as was the response to incongruent lexico-semantic cues (for the 'cognitive conflict' hypothesis). Region-of-interest analyses examined modulation of activity in the right lateral temporal lobe between these two contrasts (for the 'prosody cortex' hypothesis). Dorsolateral prefrontal and anterior cingulate cortex activity was not observed, and neither was attentional modulation of activity in the right lateral temporal cortex. However, decoding emotional prosody with incongruent lexico-semantic cues was strongly associated with left inferior frontal gyrus activity. This specialist form of conflict is therefore not processed by the brain using the same neural resources as non-affective cognitive conflict, and neither can it be handled by associated sensory cortex alone. The recruitment of inferior frontal cortex may indicate increased semantic processing demands, but other contributory functions of this region should be explored.
Abstract:
Two experiments examine the effect on an immediate recall test of simulating a reverberant auditory environment in which auditory distracters in the form of speech are played to the participants (the 'irrelevant sound effect'). An echo-intensive environment simulated by the addition of reverberation to the speech reduced the extent of 'changes in state' in the irrelevant speech stream by smoothing the profile of the waveform. In both experiments, the reverberant auditory environment produced significantly smaller irrelevant sound distraction effects than an echo-free environment. Results are interpreted in terms of changing-state hypothesis, which states that acoustic content of irrelevant sound, rather than phonology or semantics, determines the extent of the irrelevant sound effect (ISE). Copyright (C) 2007 John Wiley & Sons, Ltd.
Abstract:
The 'irrelevant sound effect' in short-term memory is commonly believed to entail a number of direct consequences for cognitive performance in the office and other workplaces (e.g. S. P. Banbury, S. Tremblay, W. J. Macken, & D. M. Jones, 2001). It may also help to identify what types of sound are most suitable as auditory warning signals. However, the conclusions drawn are based primarily upon evidence from a single task (serial recall) and a single population (young adults). This evidence is reconsidered from the standpoint of different worker populations confronted with common workplace tasks and auditory environments. Recommendations are put forward for factors to be considered when assessing the impact of auditory distraction in the workplace. Copyright (c) 2005 John Wiley & Sons, Ltd.
Abstract:
This paper addresses the crucial problem of wayfinding assistance in Virtual Environments (VEs). A number of navigation aids such as maps, agents, trails and acoustic landmarks are available to support navigation in VEs; however, most of these aids are visually dominated. This work-in-progress describes a sound-based approach that intends to assist the task of 'route decision' during navigation in a VE using music. Furthermore, through the use of musical sounds it aims to reduce the cognitive load associated with other visually as well as physically dominated tasks. To achieve these goals, the approach exploits the benefits provided by music to ease and enhance the task of wayfinding, whilst making the user's experience in the VE smooth and enjoyable.
Abstract:
Background: As people age, language-processing ability changes. While several factors modify discourse comprehension ability in older adults, syntactic complexity of auditory discourse has received scant attention. This is despite the widely researched domain of syntactic processing of single sentences in older adults. Aims: The aims of this study were to investigate the ability of healthy older adults to understand stories that differed in syntactic complexity, and its relation to working memory. Methods & Procedures: A total of 51 healthy adults (divided into three age groups) took part. They listened to brief stories (syntactically simple and syntactically complex) and had to respond to true/false comprehension probes following each story. Working memory capacity (digit span, forward and backward) was also measured. Outcomes & Results: Differences were found in the ability of healthy older adults to understand simple and complex discourse. The complex discourse in particular was more sensitive in discerning age-related language patterns. Only the complex discourse task correlated moderately with age; there was no correlation between age and simple discourse. As far as working memory is concerned, moderate correlations were found between working memory and complex discourse. Education did not correlate with discourse, either simple or complex. Conclusions: Older adults may be less efficient in forming syntactically complex representations, and this may be influenced by limitations in working memory.
Abstract:
Listeners can attend to one of several simultaneous messages by tracking one speaker’s voice characteristics. Using differences in the location of sounds in a room, we ask how well cues arising from spatial position compete with these characteristics. Listeners decided which of two simultaneous target words belonged in an attended “context” phrase when it was played simultaneously with a different “distracter” context. Talker difference was in competition with position difference, so the response indicates which cue‐type the listener was tracking. Spatial position was found to override talker difference in dichotic conditions when the talkers were similar (male). The salience of cues associated with differences in the sounds’ bearings decreased with distance between listener and sources. These cues are more effective binaurally. However, there appear to be other cues that increase in salience with distance between sounds. This increase is more prominent in diotic conditions, indicating that these cues are largely monaural. Distances between spectra of the room’s impulse responses at different locations, calculated using a gammatone filterbank (with ERB‐spaced CFs), were computed, and comparison with listeners’ responses suggested some slight monaural loudness cues, but also monaural “timbre” cues arising from the temporal‐ and spectral‐envelope differences in the speech from different locations.
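The spectral-distance computation mentioned above can be sketched in Python. This is a minimal, illustrative approximation only: it uses the Glasberg & Moore ERB-number formula to place centre frequencies, and sums spectral power into nearest-CF bands rather than applying true gammatone filters. All function names and the log-energy distance metric are our own illustrative choices, not the authors' implementation.

```python
import math

def erb_number(f_hz):
    """Glasberg & Moore (1990) ERB-number (Cam) scale for a frequency in Hz."""
    return 21.4 * math.log10(4.37 * f_hz / 1000.0 + 1.0)

def erb_number_to_hz(e):
    """Inverse of erb_number: convert an ERB-number back to Hz."""
    return (10 ** (e / 21.4) - 1.0) * 1000.0 / 4.37

def erb_spaced_cfs(f_lo, f_hi, n):
    """n centre frequencies equally spaced on the ERB-number scale."""
    e_lo, e_hi = erb_number(f_lo), erb_number(f_hi)
    step = (e_hi - e_lo) / (n - 1)
    return [erb_number_to_hz(e_lo + i * step) for i in range(n)]

def band_energies(power_spectrum, freqs, cfs):
    """Crude band analysis: assign each spectral bin to its nearest CF (in
    ERB terms) and sum the power per band."""
    energies = [0.0] * len(cfs)
    for p, f in zip(power_spectrum, freqs):
        i = min(range(len(cfs)),
                key=lambda k: abs(erb_number(cfs[k]) - erb_number(f)))
        energies[i] += p
    return energies

def spectral_distance(spec_a, spec_b, freqs, cfs):
    """Euclidean distance between log band-energy profiles of two spectra,
    e.g. of a room's impulse responses at two source positions."""
    ea = band_energies(spec_a, freqs, cfs)
    eb = band_energies(spec_b, freqs, cfs)
    return math.sqrt(sum((math.log10(a + 1e-12) - math.log10(b + 1e-12)) ** 2
                         for a, b in zip(ea, eb)))
```

In this sketch, identical spectra give a distance of zero, and growing timbre differences between positions inflate the distance, which is the sense in which such a measure can be compared against listeners' responses.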
Abstract:
There is a high prevalence of traumatic events among individuals diagnosed with schizophrenia, and of auditory hallucinations among individuals diagnosed with posttraumatic stress disorder (PTSD). However, the relationship between the symptoms associated with these disorders remains poorly understood. We conducted a multidimensional assessment of auditory hallucinations within a sample diagnosed with schizophrenia and substance abuse, both with and without co-morbid PTSD. Results suggest a rate of co-morbid PTSD similar to those reported in other studies. Patients who suffered co-morbid PTSD reported more distressing auditory hallucinations; however, the hallucinations were not more frequent or of longer duration. The need for a multidimensional assessment is supported. Results are discussed within current theoretical accounts of traumatic psychosis.
Abstract:
Recall in many types of verbal memory task is reliably disrupted by the presence of auditory distracters, with verbal distracters frequently proving the most disruptive (Beaman, 2005). A multinomial processing tree model (Schweickert, 1993) is applied to the effects on free recall of background speech from a known or an unknown language. The model reproduces the free recall curve and the impact on memory of verbal distracters for which a lexical entry exists (i.e., verbal items from a known language). The effect of semantic relatedness of distracters within a language is found to depend upon a redintegrative factor thought to reflect the contribution of the speech-production system. The differential impacts of known and unknown languages cannot be accounted for in this way, but the same effects of distraction are observed amongst bilinguals, regardless of distracter language.
Abstract:
Objective: This work investigates the nature of the comprehension impairment in Wernicke’s aphasia, by examining the relationship between deficits in auditory processing of fundamental, non-verbal acoustic stimuli and auditory comprehension. Wernicke’s aphasia, a condition resulting in severely disrupted auditory comprehension, primarily occurs following a cerebrovascular accident (CVA) to the left temporo-parietal cortex. Whilst damage to posterior superior temporal areas is associated with auditory linguistic comprehension impairments, functional imaging indicates that these areas may not be specific to speech processing but part of a network for generic auditory analysis. Methods: We examined analysis of basic acoustic stimuli in Wernicke’s aphasia participants (n = 10) using auditory stimuli reflective of theories of cortical auditory processing and of speech cues. Auditory spectral, temporal and spectro-temporal analysis was assessed using pure tone frequency discrimination, frequency modulation (FM) detection and the detection of dynamic modulation (DM) in “moving ripple” stimuli. All tasks used criterion-free, adaptive measures of threshold to ensure reliable results at the individual level. Results: Participants with Wernicke’s aphasia showed normal frequency discrimination but significant impairments in FM and DM detection, relative to age- and hearing-matched controls at the group level (n = 10). At the individual level, there was considerable variation in performance, and thresholds for both frequency and dynamic modulation detection correlated significantly with auditory comprehension abilities in the Wernicke’s aphasia participants. 
Conclusion: These results demonstrate the co-occurrence of a deficit in fundamental auditory processing of temporal and spectro-temporal non-verbal stimuli in Wernicke’s aphasia, which may make a causal contribution to the auditory language comprehension impairment. Results are discussed in the context of traditional neuropsychology and current models of cortical auditory processing.
Abstract:
The Retrieval-Induced Forgetting (RIF) paradigm includes three phases: (a) study/encoding of category exemplars, (b) practicing retrieval of a sub-set of those category exemplars, and (c) recall of all exemplars. At the final recall phase, recall of items that belong to the same categories as those items that undergo retrieval-practice, but that do not undergo retrieval-practice, is impaired. The received view is that this is because retrieval of target category-exemplars (e.g., ‘Tiger’ in the category Four-legged animal) requires inhibition of non-target category-exemplars (e.g., ‘Dog’ and ‘Lion’) that compete for retrieval. Here, we used the RIF paradigm to investigate whether ignoring auditory items during the retrieval-practice phase modulates the inhibitory process. In two experiments, RIF was present when retrieval-practice was conducted in quiet and when conducted in the presence of spoken words that belonged to a category other than that of the items that were targets for retrieval-practice. In contrast, RIF was abolished when words that either were identical to the retrieval-practice words or were only semantically related to the retrieval-practice words were presented as background speech. The results suggest that the act of ignoring speech can reduce inhibition of the non-practiced category-exemplars, thereby eliminating RIF, but only when the spoken words are competitors for retrieval (i.e., belong to the same semantic category as the to-be-retrieved items).
Abstract:
The analysis of the Auditory Brainstem Response (ABR) is of fundamental importance to the investigation of the auditory system behavior, though its interpretation has a subjective nature because of the manual process employed in its study and the clinical experience required for its analysis. When analyzing the ABR, clinicians are often interested in the identification of ABR signal components referred to as Jewett waves. In particular, the detection and study of the time when these waves occur (i.e., the wave latency) is a practical tool for the diagnosis of disorders affecting the auditory system. In this context, the aim of this research is to compare ABR manual/visual analysis provided by different examiners. Methods: The ABR data were collected from 10 normal-hearing subjects (5 men and 5 women, from 20 to 52 years). A total of 160 data samples were analyzed and a pairwise comparison between four distinct examiners was executed. We carried out a statistical study aiming to identify significant differences between the assessments provided by the examiners. For this, we used Linear Regression in conjunction with Bootstrap as a method for evaluating the relation between the responses given by the examiners. Results: The analysis suggests agreement among examiners; however, it reveals differences between assessments of the variability of the waves. We quantified the magnitude of the obtained wave latency differences: 18% of the investigated waves presented substantial differences (large and moderate), and of these 3.79% were considered not acceptable for clinical practice. Conclusions: Our results characterize the variability of the manual analysis of ABR data and the necessity of establishing unified standards and protocols for the analysis of these data. These results may also contribute to the validation and development of automatic systems that are employed in the early diagnosis of hearing loss.
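The "Linear Regression in conjunction with Bootstrap" comparison described above can be sketched as follows. This is a minimal illustration, assuming ordinary least squares between two examiners' latency annotations and a percentile bootstrap confidence interval for the slope; the function names, resample count and confidence level are illustrative, not taken from the paper.

```python
import random

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept of ys regressed on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

def bootstrap_slope_ci(xs, ys, n_boot=2000, alpha=0.05, seed=1):
    """Percentile bootstrap CI for the regression slope relating one
    examiner's wave-latency annotations to another's. A slope CI that
    excludes 1 would suggest systematic disagreement."""
    rng = random.Random(seed)
    n = len(xs)
    slopes = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        bx = [xs[i] for i in idx]
        by = [ys[i] for i in idx]
        if max(bx) == min(bx):  # degenerate resample: slope undefined, skip
            continue
        slopes.append(fit_line(bx, by)[0])
    slopes.sort()
    lo = slopes[int((alpha / 2) * len(slopes))]
    hi = slopes[int((1 - alpha / 2) * len(slopes)) - 1]
    return lo, hi
```

With perfectly agreeing examiners the fitted slope is 1 and the bootstrap interval collapses onto it; in practice the interval width gives a sense of the variability the abstract reports.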
Abstract:
Background: The analysis of the Auditory Brainstem Response (ABR) is of fundamental importance to the investigation of the auditory system behaviour, though its interpretation has a subjective nature because of the manual process employed in its study and the clinical experience required for its analysis. When analysing the ABR, clinicians are often interested in the identification of ABR signal components referred to as Jewett waves. In particular, the detection and study of the time when these waves occur (i.e., the wave latency) is a practical tool for the diagnosis of disorders affecting the auditory system. Significant differences in inter-examiner results may lead to completely distinct clinical interpretations of the state of the auditory system. In this context, the aim of this research was to evaluate the inter-examiner agreement and variability in the manual classification of ABR. Methods: A total of 160 ABR data samples were collected, for four different stimulus intensities (80 dB HL, 60 dB HL, 40 dB HL and 20 dB HL), from 10 normal-hearing subjects (5 men and 5 women, from 20 to 52 years). Four examiners with expertise in the manual classification of ABR components participated in the study. The Bland-Altman statistical method was employed for the assessment of inter-examiner agreement and variability. The mean, standard deviation and error for the bias, which is the difference between examiners’ annotations, were estimated for each pair of examiners. Scatter plots and histograms were employed for data visualization and analysis. Results: In most comparisons the differences between examiners’ annotations were below 0.1 ms, which is clinically acceptable. In four cases, a large error and standard deviation (>0.1 ms) were found, indicating the presence of outliers and thus discrepancies between examiners.
Conclusions: Our results quantify the inter-examiner agreement and variability of the manual analysis of ABR data, and they also allow for the determination of different patterns of manual ABR analysis.
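The Bland-Altman assessment described above reduces to a small computation over paired annotations. Below is a minimal sketch: the 0.1 ms clinical tolerance comes from the abstract, while the function names and the sample data are invented for illustration; the standard 1.96-SD limits of agreement are assumed.

```python
import math

def bland_altman(a, b):
    """Bland-Altman statistics for paired latency annotations (ms) from two
    examiners: per-pair means and differences, the bias (mean difference),
    the SD of the differences, and the 95% limits of agreement
    (bias +/- 1.96 * SD)."""
    diffs = [x - y for x, y in zip(a, b)]
    means = [(x + y) / 2 for x, y in zip(a, b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)
    return {"means": means, "diffs": diffs, "bias": bias, "sd": sd, "loa": loa}

def acceptable(stats, tol_ms=0.1):
    """Flag each pair whose absolute difference stays within the clinical
    tolerance (0.1 ms in the abstract)."""
    return [abs(d) <= tol_ms for d in stats["diffs"]]
```

Plotting each pair's difference against its mean, with horizontal lines at the bias and the limits of agreement, yields the standard Bland-Altman plot from which outlying examiner disagreements are read off.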