10 results for auditory attention detection
in CentAUR: Central Archive University of Reading - UK
Abstract:
Background noise should, in theory, hinder the detection of auditory cues associated with approaching danger. We tested whether foraging chaffinches Fringilla coelebs responded to background noise by increasing vigilance, and examined whether this was explained by predation-risk compensation or by a novel-stimulus hypothesis. The former predicts that only inter-scan interval should be modified in the presence of background noise, not vigilance levels generally. This is because noise hampers auditory cue detection and increases perceived predation risk primarily when the bird is in the head-down position, and also because previous tests have shown that only inter-scan interval is correlated with predator detection ability in this system. Chaffinches modified only inter-scan interval, supporting this hypothesis. At the same time, they made significantly fewer pecks when feeding during the background-noise treatment, so the increased vigilance led to a reduction in intake rate, suggesting that compensating for the increased predation risk could indirectly carry a fitness cost. Finally, the novel-stimulus hypothesis predicts that chaffinches should habituate to the noise, which did not occur within a trial or over 5 subsequent trials. We conclude that auditory cues may be an important component of the trade-off between vigilance and feeding, and discuss possible implications for anti-predation theory and ecological processes.
Abstract:
Listeners can attend to one of several simultaneous messages by tracking one speaker's voice characteristics. Using differences in the location of sounds in a room, we ask how well cues arising from spatial position compete with these characteristics. Listeners decided which of two simultaneous target words belonged in an attended "context" phrase when it was played simultaneously with a different "distracter" context. Talker difference was in competition with position difference, so the response indicates which cue type the listener was tracking. Spatial position was found to override talker difference in dichotic conditions when the talkers were similar (male). The salience of cues associated with differences in the sounds' bearings decreased with the distance between listener and sources. These cues are more effective binaurally. However, there appear to be other cues that increase in salience with the distance between sounds. This increase is more prominent in diotic conditions, indicating that these cues are largely monaural. Distances between spectra of the room's impulse responses at different locations, calculated using a gammatone filterbank (with ERB-spaced CFs), were computed; comparison with listeners' responses suggested some slight monaural loudness cues, but also monaural "timbre" cues arising from the temporal- and spectral-envelope differences in the speech from different locations.
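A minimal sketch of the kind of spectral-distance computation the last sentence describes, assuming Glasberg & Moore's (1990) ERB formula and a standard 4th-order gammatone impulse response; all function names, parameters, and the excitation-pattern distance measure are illustrative, not the authors' code:

import numpy as np

def erb_space(f_lo, f_hi, n):
    # Centre frequencies equally spaced on the ERB-rate scale,
    # using ERB(f) = f/ear_q + min_bw (Glasberg & Moore, 1990).
    ear_q, min_bw = 9.26449, 24.7
    lo = np.log(f_lo / ear_q + min_bw)
    hi = np.log(f_hi / ear_q + min_bw)
    return (np.exp(np.linspace(lo, hi, n)) - min_bw) * ear_q

def gammatone_ir(fc, fs, dur=0.05, order=4):
    # Impulse response of one gammatone channel centred at fc.
    t = np.arange(int(dur * fs)) / fs
    b = 1.019 * 2 * np.pi * (fc / 9.26449 + 24.7)   # bandwidth term
    return t ** (order - 1) * np.exp(-b * t) * np.cos(2 * np.pi * fc * t)

def excitation_pattern(x, fs, cfs):
    # Per-channel RMS level (dB) of signal x through the filterbank.
    out = [np.convolve(x, gammatone_ir(fc, fs), mode="same") for fc in cfs]
    return np.array([10 * np.log10(np.mean(y ** 2) + 1e-20) for y in out])

def spectral_distance(ir_a, ir_b, fs, n_channels=32):
    # Euclidean distance between two excitation patterns: a rough
    # stand-in for the paper's spectral-distance measure.
    cfs = erb_space(100.0, 0.4 * fs, n_channels)
    return np.linalg.norm(excitation_pattern(ir_a, fs, cfs)
                          - excitation_pattern(ir_b, fs, cfs))

Applied to two measured room impulse responses, e.g. spectral_distance(ir_near, ir_far, fs=48000), larger values would indicate stronger monaural loudness or timbre differences between the two source positions.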
Abstract:
Classical computer vision methods can only weakly emulate some of the multi-level parallelisms in signal processing and information sharing that take place in different parts of the primates' visual system, which enable it to accomplish many diverse functions of visual perception. One of the main functions of the primates' vision is to detect and recognise objects in natural scenes despite all the linear and non-linear variations of the objects and their environment. The superior performance of the primates' visual system compared with what machine vision systems have achieved to date motivates scientists and researchers to explore this area further in pursuit of more efficient vision systems inspired by natural models. In this paper, building blocks for a hierarchical, efficient object recognition model are proposed. Incorporating attention-based processing would lead to a system that processes the visual data non-linearly, focusing only on regions of interest and hence reducing processing time towards real-time performance. Further, it is suggested that the visual cortex model for recognising objects be modified by adding non-linearities in the ventral path, consistent with earlier discoveries reported by researchers in the neurophysiology of vision.
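To make the computational saving concrete, here is a toy sketch (not the paper's proposed model) in which a crude local-contrast saliency map picks a few regions of interest, so that only those blocks would reach the expensive hierarchical recognition stages; every name and parameter here is an assumption for illustration:

import numpy as np

def saliency(img, win=16):
    # Crude saliency map: standard deviation of each win-by-win block.
    h, w = img.shape
    return np.array([[img[i:i + win, j:j + win].std()
                      for j in range(0, w - win + 1, win)]
                     for i in range(0, h - win + 1, win)])

def attend(img, k=3, win=16):
    # Keep only the k most salient blocks; in a full system these
    # regions of interest would feed the recognition hierarchy.
    sal = saliency(img, win)
    flat = np.argsort(sal.ravel())[::-1][:k]
    coords = [np.unravel_index(f, sal.shape) for f in flat]
    return [img[i * win:(i + 1) * win, j * win:(j + 1) * win]
            for i, j in coords]

rois = attend(np.random.rand(128, 128))  # 3 regions instead of all 64 blocks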
Abstract:
We argue that hyper-systemizing predisposes individuals to show talent, and review evidence that hyper-systemizing is part of the cognitive style of people with autism spectrum conditions (ASC). We then clarify the hyper-systemizing theory, contrasting it with the weak central coherence (WCC) and executive dysfunction (ED) theories. The ED theory has difficulty explaining the existence of talent in ASC. While both the hyper-systemizing and WCC theories postulate excellent attention to detail, excellent attention to detail will not by itself produce talent. By contrast, the hyper-systemizing theory argues that the excellent attention to detail is directed towards detecting 'if p, then q' rules (or [input-operation-output] reasoning). Such law-based pattern-recognition systems can produce talent in systemizable domains. Finally, we argue that the excellent attention to detail in ASC is itself a consequence of sensory hypersensitivity. We review an experiment from our laboratory demonstrating sensory hypersensitivity in visual detection thresholds. We conclude that the origins of the association between autism and talent begin at the sensory level, include excellent attention to detail, and end with hyper-systemizing.
Abstract:
Background: As people age, language-processing ability changes. While several factors modify discourse comprehension ability in older adults, the syntactic complexity of auditory discourse has received scant attention, despite the widely researched domain of syntactic processing of single sentences in older adults. Aims: The aims of this study were to investigate the ability of healthy older adults to understand stories that differed in syntactic complexity, and its relation to working memory. Methods & Procedures: A total of 51 healthy adults (divided into three age groups) took part. They listened to brief stories (syntactically simple and syntactically complex) and responded to true/false comprehension probes following each story. Working memory capacity (digit span, forward and backward) was also measured. Outcomes & Results: Differences were found in the ability of healthy older adults to understand simple and complex discourse. The complex discourse in particular was more sensitive in discerning age-related language patterns: only the complex discourse task correlated moderately with age, and there was no correlation between age and simple discourse. As for working memory, moderate correlations were found between working memory and complex discourse. Education did not correlate with discourse, either simple or complex. Conclusions: Older adults may be less efficient at forming syntactically complex representations, and this may be influenced by limitations in working memory.
Abstract:
A change detection paradigm was used to estimate the role of explicit change detection in the generation of the irrelevant spatial stimulus coding underlying the Simon effect. In one condition, no blank was interposed between two successive displays, which produced efficient change detection. In another condition, the presence of a blank frame produced a robust change blindness effect, which is crucially assumed to occur as the consequence of impaired attentional orienting to the change location. The results showed a strong Simon-like effect under conditions of efficient change detection. By contrast, no Simon-like effect was observed under conditions of change blindness, namely when attention shifting towards the change location was hampered. Experiment 2 supported this pattern by showing that a Simon-like effect could be observed when the blank was present, but only when participants detected the change by means of a cue that was informative as to change location. Overall, our findings show that a Simon-like effect can only be observed under conditions of explicit change detection, likely because a shift of attention towards the change location has occurred.
Abstract:
Objective: This work investigates the nature of the comprehension impairment in Wernicke's aphasia by examining the relationship between deficits in the auditory processing of fundamental, non-verbal acoustic stimuli and auditory comprehension. Wernicke's aphasia, a condition resulting in severely disrupted auditory comprehension, primarily occurs following a cerebrovascular accident (CVA) affecting the left temporo-parietal cortex. Whilst damage to posterior superior temporal areas is associated with auditory linguistic comprehension impairments, functional imaging indicates that these areas may not be specific to speech processing but part of a network for generic auditory analysis. Methods: We examined the analysis of basic acoustic stimuli in Wernicke's aphasia participants (n = 10) using auditory stimuli reflective of theories of cortical auditory processing and of speech cues. Auditory spectral, temporal and spectro-temporal analysis was assessed using pure-tone frequency discrimination, frequency modulation (FM) detection and the detection of dynamic modulation (DM) in "moving ripple" stimuli. All tasks used criterion-free, adaptive measures of threshold to ensure reliable results at the individual level. Results: Participants with Wernicke's aphasia showed normal frequency discrimination but significant impairments in FM and DM detection relative to age- and hearing-matched controls at the group level (n = 10). At the individual level, there was considerable variation in performance, and thresholds for both frequency modulation and dynamic modulation detection correlated significantly with auditory comprehension abilities in the Wernicke's aphasia participants. Conclusion: These results demonstrate the co-occurrence of a deficit in the fundamental auditory processing of temporal and spectro-temporal non-verbal stimuli in Wernicke's aphasia, which may causally contribute to the auditory language comprehension impairment. Results are discussed in the context of traditional neuropsychology and current models of cortical auditory processing.
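The abstract does not specify which criterion-free adaptive procedure was used, so as illustration here is one common choice, a 2-down/1-up staircase, which converges on roughly 70.7% correct (Levitt, 1971); the simulated listener and every parameter below are assumptions:

import random

def staircase(p_correct, level=40.0, step=4.0, n_reversals=8):
    # 2-down/1-up adaptive track: two consecutive correct responses make
    # the task harder, one error makes it easier; the threshold estimate
    # is the mean stimulus level at the reversal points.
    streak, last_move, reversals = 0, None, []
    while len(reversals) < n_reversals:
        if random.random() < p_correct(level):   # simulated listener
            streak += 1
            if streak < 2:
                continue
            streak, move = 0, -step
        else:
            streak, move = 0, +step
        if last_move is not None and move != last_move:
            reversals.append(level)              # direction reversed
        last_move = move
        level += move
    return sum(reversals) / len(reversals)

# Toy psychometric function; a real run would present FM or DM stimuli.
print(staircase(lambda lv: min(0.99, max(0.0, 0.5 + lv / 100.0))))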
Abstract:
Background: The analysis of the Auditory Brainstem Response (ABR) is of fundamental importance to the investigation of the auditory system behavior, though its interpretation has a subjective nature because of the manual process employed in its study and the clinical experience required for its analysis. When analyzing the ABR, clinicians are often interested in the identification of ABR signal components referred to as Jewett waves. In particular, the detection and study of the time when these waves occur (i.e., the wave latency) is a practical tool for the diagnosis of disorders affecting the auditory system. In this context, the aim of this research is to compare the manual/visual analysis of the ABR provided by different examiners. Methods: The ABR data were collected from 10 normal-hearing subjects (5 men and 5 women, from 20 to 52 years). A total of 160 data samples were analyzed and a pairwise comparison between four distinct examiners was executed. We carried out a statistical study aiming to identify significant differences between the assessments provided by the examiners, using linear regression in conjunction with the bootstrap as a method for evaluating the relation between the examiners' responses. Results: The analysis suggests agreement among the examiners but reveals differences between assessments of the variability of the waves. We quantified the magnitude of the obtained wave-latency differences: 18% of the investigated waves presented substantial differences (large and moderate), and of these, 3.79% were considered not acceptable for clinical practice. Conclusions: Our results characterize the variability of the manual analysis of ABR data and the necessity of establishing unified standards and protocols for the analysis of these data. These results may also contribute to the validation and development of automatic systems that are employed in the early diagnosis of hearing loss.
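A rough sketch of the regression-plus-bootstrap comparison the Methods describe, run on simulated paired latency annotations; the data, array shapes, and names are assumptions, not the study's:

import numpy as np

rng = np.random.default_rng(0)
lat_a = rng.normal(5.6, 0.3, 160)             # examiner A wave latencies (ms)
lat_b = lat_a + rng.normal(0.02, 0.05, 160)   # examiner B, slight offset

boot = []
for _ in range(2000):                         # resample pairs with replacement
    idx = rng.integers(0, lat_a.size, lat_a.size)
    slope, intercept = np.polyfit(lat_a[idx], lat_b[idx], 1)
    boot.append((slope, intercept))
boot = np.array(boot)

# Perfect agreement would put 1 inside the slope CI and 0 inside the
# intercept CI; systematic examiner differences show up as shifts.
print(np.percentile(boot[:, 0], [2.5, 97.5]),
      np.percentile(boot[:, 1], [2.5, 97.5]))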
Abstract:
Background: The analysis of the Auditory Brainstem Response (ABR) is of fundamental importance to the investigation of the auditory system behaviour, though its interpretation has a subjective nature because of the manual process employed in its study and the clinical experience required for its analysis. When analysing the ABR, clinicians are often interested in the identification of ABR signal components referred to as Jewett waves. In particular, the detection and study of the time when these waves occur (i.e., the wave latency) is a practical tool for the diagnosis of disorders affecting the auditory system. Significant differences in inter-examiner results may lead to completely distinct clinical interpretations of the state of the auditory system. In this context, the aim of this research was to evaluate the inter-examiner agreement and variability in the manual classification of the ABR. Methods: A total of 160 ABR data samples were collected, at four different stimulus intensities (80 dBHL, 60 dBHL, 40 dBHL and 20 dBHL), from 10 normal-hearing subjects (5 men and 5 women, from 20 to 52 years). Four examiners with expertise in the manual classification of ABR components participated in the study. The Bland-Altman statistical method was employed for the assessment of inter-examiner agreement and variability. The mean, standard deviation and error of the bias, which is the difference between examiners' annotations, were estimated for each pair of examiners. Scatter plots and histograms were employed for data visualization and analysis. Results: In most comparisons the differences between examiners' annotations were below 0.1 ms, which is clinically acceptable. In four cases, a large error and standard deviation (>0.1 ms) were found, indicating the presence of outliers and thus discrepancies between examiners. Conclusions: Our results quantify the inter-examiner agreement and variability of the manual analysis of ABR data, and they also allow for the determination of different patterns of manual ABR analysis.
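The Bland-Altman quantities named in the Methods (the bias, its standard deviation, and the agreement limits) reduce to a few lines; this sketch uses simulated annotations and the 0.1 ms clinical criterion from the Results, with everything else illustrative:

import numpy as np

rng = np.random.default_rng(1)
ex1 = rng.normal(5.6, 0.3, 160)              # examiner 1 latencies (ms)
ex2 = ex1 + rng.normal(0.03, 0.04, 160)      # examiner 2

diff = ex1 - ex2
bias = diff.mean()                           # mean inter-examiner difference
sd = diff.std(ddof=1)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)   # 95% limits of agreement

flagged = np.abs(diff) > 0.1                 # 0.1 ms clinical criterion
print(f"bias {bias:.3f} ms, limits {loa[0]:.3f}..{loa[1]:.3f}, "
      f"flagged {flagged.sum()} of {diff.size}")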
Abstract:
Irrelevant sound accompanying the encoding and retrieval of verbal events impairs memory performance. However, the degree of impairment depends strongly on a range of factors. Some of them lie outside the rememberer's control, like the semantic content of the distracting sound or the nature of the test used to assess memory. Others, like the strategy used to encode the memoranda, rest under the control of the rememberer. In this paper, the factors that modulate memory impairment are outlined and discussed in terms of multiple mechanisms contributing to memory impairment under auditory distraction. The mechanisms of capture of attention by distraction, interference between the automatic seriation of distraction and the voluntary seriation of memoranda, semantic inhibition of distraction, and blocking of memoranda by semantically related distracters are described. Results that demonstrate how these mechanisms determine memory impairment under auditory distraction are also discussed. Particular attention is devoted to the possibility of voluntary control over the workings of these mechanisms and the conditions under which the negative impact of auditory distraction upon memory performance could be minimised.