162 results for Auditory perceptual disorders
in CentAUR: Central Archive University of Reading - UK
Abstract:
Non-word repetition (NWR) was investigated in adolescents with typical development, Specific Language Impairment (SLI) and Autism plus Language Impairment (ALI) (n = 17, 13, 16, and mean age 14;4, 15;4, 14;8, respectively). The study evaluated the hypothesis that poor NWR performance in both clinical groups indicates an overlapping language phenotype (Kjelgaard & Tager-Flusberg, 2001). Performance was investigated both quantitatively, e.g. overall error rates, and qualitatively, e.g. the effect of length on repetition, the proportion of errors affecting phonological structure, and the proportion of consonant substitutions involving manner changes. Findings were consistent with previous research (Whitehouse, Barry, & Bishop, 2008) demonstrating a greater effect of length in the SLI group than in the ALI group, which may be due to greater short-term memory limitations. In addition, an automated count of phoneme errors identified poorer performance in the SLI group than in the ALI group. These findings indicate differences in the language profiles of individuals with SLI and ALI, but do not rule out a partial overlap. Errors affecting phonological structure were relatively frequent, accounting for around 40% of phonemic errors, but less frequent than straight consonant-for-consonant or vowel-for-vowel substitutions. It is proposed that these two different types of errors may reflect separate contributory mechanisms. Around 50% of consonant substitutions in the clinical groups involved manner changes, suggesting poor auditory-perceptual encoding. From a clinical perspective, algorithms which automatically count phoneme errors may enhance the sensitivity of NWR as a diagnostic marker of language impairment. Learning outcomes: Readers will be able to (1) describe and evaluate the hypothesis that there is a phenotypic overlap between SLI and Autism Spectrum Disorders, (2) describe differences in the NWR performance of adolescents with SLI and ALI, and discuss whether these differences support or refute the phenotypic overlap hypothesis, and (3) understand how computational algorithms such as the Levenshtein Distance may be used to analyse NWR data.
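The abstract's reference to automated phoneme-error counting via the Levenshtein Distance can be illustrated with a minimal sketch; the phoneme transcriptions and function below are hypothetical examples of that idea, not the authors' implementation.

```python
# Illustrative sketch: counting phoneme errors in a non-word repetition (NWR)
# response with a Levenshtein edit distance over phoneme lists.
# The transcriptions below are made up for the example.

def levenshtein(target, response):
    """Minimum number of phoneme insertions, deletions and substitutions
    needed to turn the response into the target."""
    m, n = len(target), len(response)
    # dp[i][j] = edit distance between target[:i] and response[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if target[i - 1] == response[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[m][n]

# Hypothetical target non-word and a repetition attempt, as phoneme lists.
target = ["b", "r", "a", "s", "t", "e", "r"]
response = ["b", "w", "a", "t", "e", "r"]
print(levenshtein(target, response))  # 2 phoneme errors (1 substitution, 1 deletion)
```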
Abstract:
Perceptual compensation for reverberation was measured by embedding test words in contexts that were either spoken phrases or processed versions of this speech. The processing gave steady-spectrum contexts with no changes in the shape of the short-term spectral envelope over time, but with fluctuations in the temporal envelope. Test words were from a continuum between "sir" and "stir." When the amount of reverberation in test words was increased, to a level above the amount in the context, they sounded more like "sir." However, when the amount of reverberation in the context was also increased, to the level present in the test word, there was perceptual compensation in some conditions so that test words sounded more like "stir" again. Experiments here found compensation with speech contexts and with some steady-spectrum contexts, indicating that fluctuations in the context's temporal envelope can be sufficient for compensation. Other results suggest that the effectiveness of speech contexts is partly due to the narrow-band "frequency-channels" of the auditory periphery, where temporal-envelope fluctuations can be more pronounced than they are in the sound's broadband temporal envelope. Further results indicate that for compensation to influence speech, the context needs to be in a broad range of frequency channels. (c) 2007 Acoustical Society of America.
Abstract:
Perceptual constancy effects are observed when differing amounts of reverberation are applied to a context sentence and a test‐word embedded in it. Adding reverberation to members of a “sir”‐“stir” test‐word continuum causes temporal‐envelope distortion, which has the effect of eliciting more sir responses from listeners. If the same amount of reverberation is also applied to the context sentence, the number of sir responses decreases again, indicating an “extrinsic” compensation for the effects of reverberation. Such a mechanism would effect perceptual constancy of phonetic perception when temporal envelopes vary in reverberation. This experiment asks whether such effects precede or follow grouping. Eight auditory‐filter shaped noise‐bands were modulated with the temporal envelopes that arise when speech is played through these filters. The resulting “gestalt” percept is the appropriate speech rather than the sound of noise‐bands, presumably due to across‐channel “grouping.” These sounds were played to listeners in “matched” conditions, where reverberation was present in the same bands in both context and test‐word, and in “mismatched” conditions, where the bands in which reverberation was added differed between context and test‐word. Constancy effects were obtained in matched conditions, but not in mismatched conditions, indicating that this type of constancy in hearing precedes across‐channel grouping.
Abstract:
There is a high prevalence of traumatic events within individuals diagnosed with schizophrenia, and of auditory hallucinations within individuals diagnosed with posttraumatic stress disorder (PTSD). However, the relationship between the symptoms associated with these disorders remains poorly understood. We conducted a multidimensional assessment of auditory hallucinations within a sample diagnosed with schizophrenia and substance abuse, both with and without co-morbid PTSD. Results suggest a rate of co-morbid PTSD similar to those reported within other studies. Patients who suffered co-morbid PTSD reported more distressing auditory hallucinations. However, the hallucinations were not more frequent or of longer duration. The need for a multidimensional assessment is supported. Results are discussed within current theoretical accounts of traumatic psychosis.
Abstract:
Three experiments measured constancy in speech perception, using natural-speech messages or noise-band vocoder versions of them. The eight vocoder-bands had equally log-spaced center-frequencies and the shapes of corresponding “auditory” filters. Consequently, the bands had the temporal envelopes that arise in these auditory filters when the speech is played. The “sir” or “stir” test-words were distinguished by degrees of amplitude modulation, and played in the context: “next you’ll get _ to click on.” Listeners identified test-words appropriately, even in the vocoder conditions where the speech had a “noise-like” quality. Constancy was assessed by comparing the identification of test-words with low or high levels of room reflections across conditions where the context had either a low or a high level of reflections. Constancy was obtained with both the natural and the vocoded speech, indicating that the effect arises through temporal-envelope processing. Two further experiments assessed perceptual weighting of the different bands, both in the test word and in the context. The resulting weighting functions both increase monotonically with frequency, following the spectral characteristics of the test-word’s [s]. It is suggested that these two weighting functions are similar because they both come about through the perceptual grouping of the test-word’s bands.
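As an illustration of the noise-band vocoding described above, here is a minimal sketch assuming log-spaced band edges, Butterworth analysis filters and Hilbert-envelope extraction; these parameters are assumptions for the example, not the exact signal chain used in the experiments.

```python
# Sketch of noise-band vocoding: band-pass the speech into log-spaced channels,
# extract each channel's temporal envelope, and use it to modulate band-limited
# noise. Band edges, filter order and envelope method are illustrative.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def noise_vocode(speech, fs, n_bands=8, f_lo=100.0, f_hi=7500.0):
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)    # log-spaced band edges
    out = np.zeros_like(speech)
    rng = np.random.default_rng(0)
    for lo, hi in zip(edges[:-1], edges[1:]):
        b, a = butter(3, [lo, hi], btype="band", fs=fs)
        band = filtfilt(b, a, speech)                 # analysis band
        env = np.abs(hilbert(band))                   # temporal envelope
        noise = filtfilt(b, a, rng.standard_normal(len(speech)))
        out += env * noise                            # envelope-modulated noise band
    return out / np.max(np.abs(out))                  # normalise to +/-1

# Example: vocode one second of a synthetic amplitude-modulated tone at 16 kHz.
fs = 16000
t = np.arange(fs) / fs
signal_in = np.sin(2 * np.pi * 150 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 4 * t))
vocoded = noise_vocode(signal_in, fs)
```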
Abstract:
The analysis of the Auditory Brainstem Response (ABR) is of fundamental importance to the investigation of the auditory system behavior, though its interpretation has a subjective nature because of the manual process employed in its study and the clinical experience required for its analysis. When analyzing the ABR, clinicians are often interested in the identification of ABR signal components referred to as Jewett waves. In particular, the detection and study of the time when these waves occur (i.e., the wave latency) is a practical tool for the diagnosis of disorders affecting the auditory system. In this context, the aim of this research is to compare ABR manual/visual analysis provided by different examiners. Methods: The ABR data were collected from 10 normal-hearing subjects (5 men and 5 women, from 20 to 52 years). A total of 160 data samples were analyzed and a pairwise comparison between four distinct examiners was executed. We carried out a statistical study aiming to identify significant differences between assessments provided by the examiners. For this, we used Linear Regression in conjunction with Bootstrap as a method for evaluating the relation between the responses given by the examiners. Results: The analysis suggests agreement among examiners; however, it reveals differences between assessments of the variability of the waves. We quantified the magnitude of the obtained wave-latency differences: 18% of the investigated waves presented substantial (large or moderate) differences, and of these, 3.79% were considered not acceptable for clinical practice. Conclusions: Our results characterize the variability of the manual analysis of ABR data and point to the necessity of establishing unified standards and protocols for the analysis of these data. These results may also contribute to the validation and development of automatic systems that are employed in the early diagnosis of hearing loss.
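To illustrate the Linear Regression with Bootstrap comparison described above, a minimal sketch follows; the paired latency values are invented for the example, and resampling pairs and refitting a line is one straightforward way to obtain confidence intervals, not a reproduction of the study's analysis.

```python
# Sketch: bootstrap confidence intervals for the regression relating two
# examiners' wave-latency annotations. Latency values (ms) are made up.
import numpy as np

rng = np.random.default_rng(1)
examiner_a = np.array([5.52, 5.61, 5.48, 5.70, 5.58, 5.66, 5.55, 5.63])
examiner_b = np.array([5.50, 5.64, 5.46, 5.74, 5.60, 5.62, 5.57, 5.61])

slopes, intercepts = [], []
for _ in range(2000):
    idx = rng.integers(0, len(examiner_a), len(examiner_a))  # resample pairs
    slope, intercept = np.polyfit(examiner_a[idx], examiner_b[idx], 1)
    slopes.append(slope)
    intercepts.append(intercept)

# Perfect agreement between examiners would give slope ~1 and intercept ~0.
print("slope 95% CI:", np.percentile(slopes, [2.5, 97.5]))
print("intercept 95% CI:", np.percentile(intercepts, [2.5, 97.5]))
```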
Abstract:
Background: The analysis of the Auditory Brainstem Response (ABR) is of fundamental importance to the investigation of the auditory system behaviour, though its interpretation has a subjective nature because of the manual process employed in its study and the clinical experience required for its analysis. When analysing the ABR, clinicians are often interested in the identification of ABR signal components referred to as Jewett waves. In particular, the detection and study of the time when these waves occur (i.e., the wave latency) is a practical tool for the diagnosis of disorders affecting the auditory system. Significant differences in inter-examiner results may lead to completely distinct clinical interpretations of the state of the auditory system. In this context, the aim of this research was to evaluate the inter-examiner agreement and variability in the manual classification of ABR. Methods: A total of 160 ABR data samples were collected, at four different stimulus intensities (80 dBHL, 60 dBHL, 40 dBHL and 20 dBHL), from 10 normal-hearing subjects (5 men and 5 women, from 20 to 52 years). Four examiners with expertise in the manual classification of ABR components participated in the study. The Bland-Altman statistical method was employed for the assessment of inter-examiner agreement and variability. The mean, standard deviation and error for the bias, which is the difference between examiners’ annotations, were estimated for each pair of examiners. Scatter plots and histograms were employed for data visualization and analysis. Results: In most comparisons the differences between examiners’ annotations were below 0.1 ms, which is clinically acceptable. In four cases, a large error and standard deviation (>0.1 ms) were found, indicating the presence of outliers and thus discrepancies between examiners. Conclusions: Our results quantify the inter-examiner agreement and variability of the manual analysis of ABR data, and they also allow for the determination of different patterns of manual ABR analysis.
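A minimal sketch of a Bland-Altman agreement calculation of the kind described above follows: the bias is the mean difference between two examiners' annotations and the limits of agreement are the bias ± 1.96 standard deviations. The latency values are illustrative, not the study's data.

```python
# Sketch: Bland-Altman bias and limits of agreement for one pair of examiners.
# Latencies (ms) below are hypothetical.
import numpy as np

examiner_1 = np.array([5.52, 5.61, 5.48, 5.70, 5.58, 5.66])
examiner_2 = np.array([5.50, 5.64, 5.46, 5.74, 5.60, 5.62])

diff = examiner_1 - examiner_2          # per-wave annotation differences
bias = diff.mean()                      # mean difference (bias)
sd = diff.std(ddof=1)                   # sample standard deviation
lower, upper = bias - 1.96 * sd, bias + 1.96 * sd

print(f"bias = {bias:.3f} ms, limits of agreement = [{lower:.3f}, {upper:.3f}] ms")
# Differences beyond ~0.1 ms would flag clinically relevant disagreement.
```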
Abstract:
Short-term memory (STM) impairments are prevalent in adults with acquired brain injuries. While there are several published tests to assess these impairments, the majority require speech production, e.g. digit span (Wechsler, 1987). This feature may make them unsuitable for people with aphasia and motor speech disorders because of word-finding difficulties and speech demands, respectively. If patients perceive the speech demands of the test to be high, they may not engage with testing. Furthermore, existing STM tests are mainly ‘pen-and-paper’ tests, which can jeopardise accuracy. To address these shortcomings, we designed and standardised a novel computerised test that does not require speech output and, because of its computerised delivery, enables clinicians to identify STM impairments with greater precision than current tests. The matching listening span task, similar to the non-normed PALPA 13 (Kay, Lesser & Coltheart, 1992), is used to test short-term memory for the serial order of spoken items. Sequences of digits are presented in pairs. The person hears the first sequence, followed by the second sequence, and decides whether the two sequences are the same or different. In the computerised test, the sequences are presented as live-voice recordings on a portable computer through a software application (Molero Martin, Laird, Hwang & Salis, 2013). We collected normative data from healthy older adults (N = 22-24) using digits, real words (one- and two-syllable) and non-words (one- and two-syllable). Their performance was scored following two systems; in the Highest Span system, the score was the highest span length (e.g. 2-8) at which a participant responded correctly to more than 7 out of 10 trials. Test-retest reliability was also assessed in a subgroup of participants. The test will be available free of charge for clinicians and researchers to use.
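The Highest Span scoring rule described above can be sketched as follows; the function and trial counts are hypothetical illustrations of that rule, not the test's software.

```python
# Sketch of the Highest Span score: the longest sequence length at which the
# participant answered more than 7 of the 10 same/different trials correctly.

def highest_span(correct_per_length, criterion=7):
    """correct_per_length maps span length -> number of correct trials (out of 10)."""
    passed = [length for length, n_correct in correct_per_length.items()
              if n_correct > criterion]
    return max(passed) if passed else None

# Hypothetical participant tested on spans 2-8 in the digits condition.
scores = {2: 10, 3: 10, 4: 9, 5: 8, 6: 7, 7: 5, 8: 4}
print(highest_span(scores))  # 5 (span 6 scored 7/10, which does not exceed the criterion)
```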
Abstract:
Subdermal magnetic implants originated as an art form in the world of body modification. To date, an in-depth scientific analysis of the benefits of this implant has yet to be conducted. This research explores the concept of sensory extension of the tactile sense utilising this form of implantation. This relatively simple procedure enables the tactile sense to respond to static and alternating magnetic fields. This is not to say that the underlying biology of the system has changed; the implant does not increase our tactile frequency-response range or sensitivity to pressure, but it does evoke a perceptual response to a stimulus that is not innately available to humans. Within this research, two social surveys were conducted in order to ascertain, first, the social acceptance of the general notion of human enhancement and, second, the perceptual experiences of individuals with the magnetic implants themselves. In terms of acceptance of the notion of sensory improvement (via implantation), ~39% of the general population questioned responded positively, with a further ~25% of respondents giving the indecisive response. Thus, with careful dissemination, a large proportion of individuals might adopt a technology such as this if it were to become available to consumers. Interestingly, of the responses collected from the magnetic-implant survey, ~60% of respondents underwent the implant for magnetic-vision purposes. The main contribution of this research, however, comes from a series of psychophysical tests in which 7 subjects with subdermal magnetic implants were compared with 7 subjects who had similar magnets superficially attached to their dermis. The experimentation examined multiple psychometric thresholds of the subjects, including intensity, frequency and temporal thresholds. Whilst relatively simple, the experimental setup for the perceptual experimentation was novel in that custom hardware and protocols were created in order to determine the subjective thresholds of the individuals. The overall purpose of this research is to utilise this concept in high-stress scenarios, such as driving or piloting, whereby alerts and warnings could be relayed to an operator without intruding upon their other (typically overloaded) exterior senses (i.e. the auditory and visual senses). Hence, each of the thresholding experiments was designed with the intention of utilising the results in the design of signals for information transfer. The findings from the study show that the implanted group of subjects significantly outperformed the superficial group in the absolute intensity threshold experiment, i.e. the implanted group required significantly less force than the superficial group in order to perceive the stimulus. The results for the frequency difference threshold showed no significant difference between the two groups tested. Interestingly, however, at low frequencies (20 and 50 Hz) the subjects' ability to discriminate frequencies significantly increased with more complex waveforms (square and sawtooth) compared with the typically used sine wave. Furthermore, a novel protocol for establishing the temporal gap-detection threshold during a temporal numerosity study is established in this thesis. This experiment measured the subjects' capability to correctly determine the number of concatenated signals (referred to as pulses) presented to them whilst the time between the signals tended to zero.
A significant finding was that altering the length, the frequency and the number of cycles of the pulses changed the inter-pulse time required for correct recognition. This finding will ultimately aid in the design of the tactile alerts for this method of information transfer. Preliminary development work on the use of this method of input to the body in an automotive scenario is also presented within this thesis, in the form of a driving simulation. Its overall goal is to present warning alerts to a driver, such as imminent rear-end collisions or excessive speed, in order to prevent incidents and penalties from occurring. Discussion of the broader utility of this implant is presented, reflecting on its potential use as a basis for vibrotactile and sensory-substitution devices. This discussion extends to postulations on its use as a human-machine interface, as well as on how a similar implant could be used within the ear as a hearing-aid device.
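As an illustration of the temporal-numerosity stimuli described above, the sketch below generates a train of sinusoidal pulses whose frequency, duration, count and inter-pulse gap can be varied; all parameter values are assumptions for the example, not the thesis' actual protocol or hardware drive signals.

```python
# Sketch: build a pulse train for a temporal-numerosity trial. As the silent
# gap between pulses shrinks towards zero, counting the pulses becomes harder.
import numpy as np

def pulse_train(n_pulses, pulse_ms, gap_ms, freq_hz, fs=44100):
    t = np.arange(int(fs * pulse_ms / 1000)) / fs
    pulse = np.sin(2 * np.pi * freq_hz * t)      # one sinusoidal pulse
    gap = np.zeros(int(fs * gap_ms / 1000))      # silent inter-pulse interval
    parts = [pulse if i % 2 == 0 else gap for i in range(2 * n_pulses - 1)]
    return np.concatenate(parts)

# Hypothetical trial series: four 50-ms, 250-Hz pulses with a shrinking gap.
for gap_ms in (100, 50, 25, 10):
    stim = pulse_train(n_pulses=4, pulse_ms=50, gap_ms=gap_ms, freq_hz=250)
    print(f"{gap_ms:>3} ms gap -> {len(stim) / 44100:.3f} s stimulus")
```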
Abstract:
What is already known on the subject? Multi-sensory treatment approaches have been shown to impact outcome measures positively, such as accuracy of speech movement patterns and speech intelligibility in adults with motor speech disorders, as well as in children with apraxia of speech, autism and cerebral palsy. However, there has been no empirical study using multi-sensory treatment for children with speech sound disorders (SSDs) who demonstrate motor control issues in the jaw and orofacial structures (e.g. jaw sliding, jaw over-extension, inadequate lip rounding/retraction and decreased integration of speech movements). What this paper adds? Findings from this study indicate that, for speech production disorders where both the planning and production of spatiotemporal parameters of movement sequences for speech are disrupted, multi-sensory treatment programmes that integrate auditory, visual and tactile–kinesthetic information improve auditory and visual accuracy of speech production. The training words (practised in treatment) and test words (not practised in treatment) both demonstrated positive change in most participants, indicating generalization of target features to untrained words. It is inferred that treatment that focuses on integrating multi-sensory information and normalizing parameters of speech movements is an effective method for treating children with SSDs who demonstrate speech motor control issues.
Abstract:
In this research, a cross-modal paradigm was chosen to test the hypothesis that affective olfactory and auditory cues paired with neutral visual stimuli bearing no resemblance or logical connection to the affective cues can evoke preference shifts in those stimuli. Neutral visual stimuli of abstract paintings were presented simultaneously with liked and disliked odours and sounds, with neutral-neutral pairings serving as controls. The results confirm previous findings that the affective evaluation of previously neutral visual stimuli shifts in the direction of contingently presented affective auditory stimuli. In addition, this research shows the presence of conditioning with affective odours having no logical connection with the pictures.
Abstract:
Three experiments investigated irrelevant sound interference of lip-read lists. In Experiment 1, an acoustically changing sequence of nine irrelevant utterances was more disruptive to spoken immediate identification of lists of nine lip-read digits than nine repetitions of the same utterances (the changing-state effect; Jones, Madden, & Miles, 1992). Experiment 2 replicated this finding when lip-read items were sampled with replacement from the nine digits to form the lip-read lists. In Experiment 3, when the irrelevant sound was confined to the retention interval of a delayed recall task, a changing-state pattern of disruption also occurred. Results confirm a changing-state effect in memory for lip-read items but also point to the possibility that, for lip-reading, changing-state effects may occur at an earlier, perceptual stage.
Abstract:
Four experiments investigate the hypothesis that irrelevant sound interferes with serial recall of auditory items in the same fashion as with visually presented items. In Experiment 1 an acoustically changing sequence of 30 irrelevant utterances was more disruptive than 30 repetitions of the same utterance (the changing-state effect; Jones, Madden, & Miles, 1992) whether the to-be-remembered items were visually or auditorily presented. Experiment 2 showed that two different utterances spoken once (a heterogeneous compound suffix; LeCompte & Watkins, 1995) produced less disruption to serial recall than 15 repetitions of the same sequence. Disruption thus depends on the number of sounds in the irrelevant sequence. In Experiments 3a and 3b the number of different sounds, the "token-set" size (Tremblay & Jones, 1998), in an irrelevant sequence also influenced the magnitude of disruption in both irrelevant sound and compound suffix conditions. The results support the view that the disruption of memory for auditory items, like memory for visually presented items, is dependent on the number of different irrelevant sounds presented and the size of the set from which these sounds are taken. Theoretical implications are discussed.
Abstract:
Three experiments examined transfer across form (words/pictures) and modality (visual/auditory) in written word, auditory word, and pictorial implicit memory tests, as well as on a free recall task. Experiment 1 showed no significant transfer across form on any of the three implicit memory tests, and an asymmetric pattern of transfer across modality. In contrast, the free recall results revealed a very different picture. Experiment 2 further investigated the asymmetric modality effects obtained for the implicit memory measures by employing articulatory suppression and picture naming to control the generation of phonological codes. Finally, Experiment 3 examined the effects of overt word naming and covert picture labelling on transfer between study and test form. The results of the experiments are discussed in relation to Tulving and Schacter's (1990) Perceptual Representation Systems framework and Roediger's (1990) Transfer Appropriate Processing theory.