934 results for Auditory perception
Abstract:
The tendency to hear a tone sequence as 2 or more streams (segregated) builds up, but a sudden change in properties can reset the percept to 1 stream (integrated). This effect has not hitherto been explored using an objective measure of streaming. Stimuli comprised a 2.0-s fixed-frequency inducer followed by a 0.6-s test sequence of alternating pure tones (3 low [L]-high [H] cycles). Listeners compared intervals for which the test sequence was either isochronous or the H tones were slightly delayed. Resetting of segregation should make identifying the anisochronous interval easier. The HL frequency separation was varied (0-12 semitones), and properties of the inducer and test sequence were set to the same or different values. Inducer properties manipulated were frequency, number of onsets (several short bursts vs. one continuous tone), tone:silence ratio (short vs. extended bursts), level, and lateralization. All differences between the inducer and the L tones reduced temporal discrimination thresholds toward those for the no-inducer case, including properties shown previously not to affect segregation greatly. Overall, it is concluded that abrupt changes in a sequence cause resetting and improve subsequent temporal discrimination. (PsycINFO Database Record © 2009 APA, all rights reserved)
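The stimulus timing described above can be sketched in code. This is a minimal illustration assuming a 44.1-kHz sample rate and a 1-kHz L tone (neither value is given in the abstract); the H frequency is set a chosen number of semitones above L, and a nonzero H-tone delay makes the test sequence anisochronous.

```python
import math

SR = 44100  # sample rate in Hz; an assumption, not from the abstract

def tone(freq_hz, dur_s):
    """Pure tone as a list of samples."""
    n = int(SR * dur_s)
    return [math.sin(2 * math.pi * freq_hz * i / SR) for i in range(n)]

def streaming_trial(l_freq=1000.0, semitones=6, h_delay_s=0.0):
    """2.0-s fixed-frequency inducer followed by a 0.6-s test sequence
    of three L-H cycles (six 100-ms tones). A nonzero h_delay_s delays
    each H tone, making the test sequence anisochronous."""
    h_freq = l_freq * 2 ** (semitones / 12)  # H lies `semitones` above L
    sig = tone(l_freq, 2.0)                  # inducer
    for _ in range(3):                       # three L-H cycles
        sig += tone(l_freq, 0.100)
        sig += [0.0] * int(SR * h_delay_s)   # optional H-tone delay
        sig += tone(h_freq, 0.100)
    return sig

trial = streaming_trial(semitones=6)
print(len(trial) / SR)  # 2.6 s total (2.0-s inducer + 0.6-s test)
```

Listeners in the study compared an isochronous interval against one built with a small `h_delay_s`, so the discrimination threshold corresponds to the smallest delay that can be reliably detected.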
Abstract:
A harmonic that begins before the other harmonics contributes less than they do to vowel quality. This reduction can be partly reversed by accompanying the leading portion with a captor tone. This effect is usually interpreted as reflecting perceptual grouping of the captor with the leading portion. Instead, it has recently been proposed that the captor effect depends on broadband inhibition within the central auditory system. A test of psychophysical predictions based on this proposal showed that captor efficacy is (a) maintained for noise-band captors, (b) absent when a captor accompanies a harmonic that continues after the vowel, and (c) maintained for 80 ms or more over a gap between captor offset and vowel onset. These findings support and refine the inhibitory account. PsycINFO Database Record © 2006 APA, all rights reserved.
Abstract:
The factors influencing the stream segregation of discrete tones and the perceived continuity of discrete tones as continuing through an interrupting masker are well understood as separate phenomena. Two experiments tested whether perceived continuity can influence the build-up of stream segregation by manipulating the perception of continuity during an induction sequence and measuring streaming in a subsequent test sequence comprising three triplets of low and high frequency tones (LHL-…). For experiment 1, a 1.2-s standard induction sequence comprising six 100-ms L-tones strongly promoted segregation, whereas a single extended L-inducer (1.1 s plus 100-ms silence) did not. Segregation was similar to that following the single extended inducer when perceived continuity was evoked by inserting noise bursts between the individual tones. Reported segregation increased when the noise level was reduced such that perceived continuity no longer occurred. Experiment 2 presented a 1.3-s continuous inducer created by bridging the 100-ms silence between an extended L-inducer and the first test-sequence tone. This configuration strongly promoted segregation. Segregation was also increased by filling the silence after the extended inducer with noise, such that it was perceived like a bridging inducer. Like physical continuity, perceived continuity can promote or reduce test-sequence streaming, depending on stimulus context.
Abstract:
Onset asynchrony is an important cue for auditory scene analysis. For example, a harmonic of a vowel that begins before the other components contributes less to the perceived phonetic quality. This effect was thought primarily to involve high-level grouping processes, because the contribution can be partly restored by accompanying the leading portion of the harmonic (precursor) with a synchronous captor tone an octave higher, and hence too remote to influence adaptation of the auditory-nerve response to that harmonic. However, recent work suggests that this restoration effect arises instead from inhibitory interactions relatively early in central auditory processing. The experiments reported here have reevaluated the role of adaptation in grouping by onset asynchrony and explored further the inhibitory account of the restoration effect. Varying the frequency of the precursor in the range ± 10% relative to the vowel harmonic (Experiment 1), or introducing a silent interval from 0 to 320 ms between the precursor and the vowel (Experiment 2), both produce effects on vowel quality consistent with those predicted from peripheral adaptation or recovery from it. However, there were some listeners for whom even the smallest gap largely eliminated the effect of the precursor. Consistent with the inhibitory account of the restoration effect, a contralateral pure tone whose frequency is close to that of the precursor is highly effective at restoring the contribution of the asynchronous harmonic (Experiment 3). When the frequencies match, lateralization cues arising from binaural fusion of the precursor and contralateral tone may also contribute to this restoration. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Abstract:
Thirteen experiments investigated the dynamics of stream segregation. Experiments 1-6b used a similar method, in which a same-frequency induction sequence (usually 10 repetitions of an identical pure tone) promoted segregation in a subsequent, briefer test sequence (of alternating low- and high-frequency tones). Experiments 1-2 measured streaming using a direct report of perception and a temporal-discrimination task, respectively. Creating a single deviant by altering the final inducer (e.g., in level or replacement with silence) reduced segregation, often substantially. As the prior inducers remained unaltered, it is proposed that the single change actively reset build-up. The extent of resetting varied gradually with the size of a frequency change, once noticeable (Experiments 3a-3b). By manipulating the serial position of a change, Experiments 4a-4b demonstrated that resetting only occurred when the final inducer was replaced with silence, as build-up is very rapid during a same-frequency induction sequence. Therefore, the observed resetting cannot be explained by fewer inducers being presented. Experiment 5 showed that resetting caused by a single deviant did not increase when prior inducers were made unpredictable in frequency (four-semitone range). Experiments 6a-6b demonstrated that actual and perceived continuity have a similar effect on subsequent streaming judgements, promoting either integration or segregation depending on listening context. Experiment 7 found that same-frequency inducers were considerably more effective at promoting segregation than an alternating-frequency inducer, and that a trend for deviant-tone resetting was only apparent for the same-frequency case.
Using temporal-order judgments, Experiments 8-9 demonstrated the stream segregation of pure-tone-like percepts, evoked by sudden changes in amplitude or interaural time difference for individual components of a complex tone. Active resetting was observed when a deviant was inserted into a sequence of these percepts (Experiment 10). Overall, these experiments offer new insight into the segregation-promoting effect of induction sequences, and the factors which can reset this effect.
Abstract:
Our goal was to investigate auditory and speech perception abilities of children with and without reading disability (RD) and associations between auditory, speech perception, reading, and spelling skills. Participants were 9-year-old, Finnish-speaking children with RD (N = 30) and typically reading children (N = 30). Results showed significant differences between the groups in phoneme duration discrimination but not in perception of amplitude modulation and rise time. Correlations among rise time discrimination, phoneme duration discrimination, and spelling accuracy were found for children with RD. Those children with poor rise time discrimination were also poor in phoneme duration discrimination and in spelling. The results suggest that auditory processing abilities could, at least in some children, affect speech perception skills, which in turn would lead to phonological processing deficits and dyslexia.
Abstract:
Sensory processing is a crucial underpinning of the development of social cognition, a function which is compromised to a variable degree in patients with pervasive developmental disorders (PDD). In this manuscript, we review some of the most recent and relevant contributions that have examined auditory sensory processing derangement in PDD. The variability in the clinical characteristics of the samples studied so far, in terms of the severity of the associated cognitive deficits and the limited compliance that accompanies them, underlying aetiology, and demographic features, makes a univocal interpretation arduous. We hypothesise that, in patients with severe mental deficits, the presence of impaired auditory sensory memory, as expressed by the mismatch negativity, could be a non-specific indicator of more diffuse cortical deficits rather than causally related to the clinical symptomatology. More consistent findings seem to emerge from studies of less severely impaired patients, in whom increased pitch perception has been interpreted as an indicator of increased local processing, probably as a compensatory mechanism for the lack of global processing (central coherence). This latter hypothesis seems extremely attractive, and future trials in larger cohorts of patients, possibly standardising the characteristics of the stimuli, are a much-needed development. Finally, the specificity of the role of auditory derangement, as opposed to other sensory channels, needs to be assessed more systematically using multimodal stimuli in the same patient group. (c) 2006 Elsevier B.V. All rights reserved.
Abstract:
A sequence of constant-frequency tones can promote streaming in a subsequent sequence of alternating-frequency tones, but why this effect occurs is not fully understood and its time course has not been investigated. Experiment 1 used a 2.0-s-long constant-frequency inducer (10 repetitions of a low-frequency pure tone) to promote segregation in a subsequent, 1.2-s test sequence of alternating low- and high-frequency tones. Replacing the final inducer tone with silence substantially reduced reported test-sequence segregation. This reduction did not occur when either the 4th or 7th inducer was replaced with silence. This suggests that a change at the induction/test-sequence boundary actively resets build-up, rather than less segregation occurring simply because fewer inducer tones were presented. Furthermore, Experiment 2 found that a constant-frequency inducer produced its maximum segregation-promoting effect after only three tones—this contrasts with the more gradual build-up typically observed for alternating-frequency sequences. Experiment 3 required listeners to judge continuously the grouping of 20-s test sequences. Constant-frequency inducers were considerably more effective at promoting segregation than alternating ones; this difference persisted for ~10 s. In addition, resetting arising from a single deviant (longer tone) was associated only with constant-frequency inducers. Overall, the results suggest that constant-frequency inducers promote segregation by capturing one subset of test-sequence tones into an ongoing, preestablished stream, and that a deviant tone may reduce segregation by disrupting this capture. These findings offer new insight into the dynamics of stream segregation, and have implications for the neural basis of streaming and the role of attention in stream formation. (PsycINFO Database Record (c) 2013 APA, all rights reserved)
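One way to picture the dynamics reported above is a toy model in which the probability of reporting segregation builds up over exposure time and a deviant partially resets it. This is purely illustrative: the saturating-exponential form, the time constants, and the reset fraction are assumptions for the sketch, not values estimated in the study.

```python
import math

def buildup(t_s, tau_s):
    """Probability of a 'segregated' report after t_s seconds of
    exposure, modelled as a saturating exponential build-up."""
    return 1.0 - math.exp(-t_s / tau_s)

# Assumed time constants: fast build-up for a constant-frequency
# inducer (maximal effect after ~three 200-ms tones, Experiment 2)
# versus slow build-up for an alternating sequence. Illustrative only.
TAU_CONSTANT = 0.3
TAU_ALTERNATING = 2.0

p_constant = buildup(0.6, TAU_CONSTANT)      # after 0.6 s of induction
p_alternating = buildup(0.6, TAU_ALTERNATING)

# A single deviant tone partially resets the accumulated build-up.
RESET_FRACTION = 0.7  # assumed
p_after_deviant = p_constant * (1 - RESET_FRACTION)
```

Under these assumed parameters the constant-frequency inducer reaches near-asymptotic segregation within the induction sequence, while a deviant drops it most of the way back, mirroring the qualitative pattern of the experiments.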
Abstract:
Once thought to be predominantly the domain of cortex, multisensory integration has now been found at numerous sub-cortical locations in the auditory pathway. Prominent ascending and descending connections within the pathway suggest that the system may utilize non-auditory activity to help filter incoming sounds as they first enter the ear. Active mechanisms in the periphery, particularly the outer hair cells (OHCs) of the cochlea and middle ear muscles (MEMs), are capable of modulating the sensitivity of other peripheral mechanisms involved in the transduction of sound into the system. Through indirect mechanical coupling of the OHCs and MEMs to the eardrum, motion of these mechanisms can be recorded as acoustic signals in the ear canal. Here, we utilize this recording technique to describe three different experiments that demonstrate novel multisensory interactions occurring at the level of the eardrum. 1) In the first experiment, measurements in humans and monkeys performing a saccadic eye movement task to visual targets indicate that the eardrum oscillates in conjunction with eye movements. The amplitude and phase of the eardrum movement, which we dub the Oscillatory Saccadic Eardrum Associated Response or OSEAR, depended on the direction and horizontal amplitude of the saccade and occurred in the absence of any externally delivered sounds. 2) For the second experiment, we use an audiovisual cueing task to demonstrate a dynamic change to pressure levels in the ear when a sound is expected versus when one is not. Specifically, we observe a drop in frequency power and variability from 0.1 to 4 kHz around the time when the sound is expected to occur, in contrast to a slight increase in power at both lower and higher frequencies.
3) For the third experiment, we show that seeing a speaker say a syllable that is incongruent with the accompanying audio can alter the response patterns of the auditory periphery, particularly during the most relevant moments in the speech stream. These visually influenced changes may contribute to the altered percept of the speech sound. Collectively, we presume that these findings represent the combined effect of OHCs and MEMs acting in tandem in response to various non-auditory signals in order to manipulate the receptive properties of the auditory system. These influences may have a profound, and previously unrecognized, impact on how the auditory system processes sounds from initial sensory transduction all the way to perception and behavior. Moreover, we demonstrate that the entire auditory system is, fundamentally, a multisensory system.
Abstract:
Integrating information from multiple sources is a crucial function of the brain. Examples of such integration include combining stimuli of different modalities, such as visual and auditory; combining multiple stimuli of the same modality, such as two concurrent auditory stimuli; and integrating stimuli from the sensory organs (i.e., the ears) with stimuli delivered from brain-machine interfaces.
The overall aim of this body of work is to empirically examine stimulus integration in these three domains to inform our broader understanding of how and when the brain combines information from multiple sources.
First, I examine visually-guided auditory learning, a problem with implications for the general question of how, during learning, the brain determines which lessons to learn (and which not to learn). For example, sound localization is a behavior that is partially learned with the aid of vision. This process requires correctly matching a visual location to that of a sound, which is an intrinsically circular problem when sound location is itself uncertain and the visual scene is rife with possible visual matches. Here, we develop a simple paradigm using visual guidance of sound localization to gain insight into how the brain confronts this type of circularity. We tested two competing hypotheses. 1: The brain guides sound location learning based on the synchrony or simultaneity of auditory-visual stimuli, potentially involving a Hebbian associative mechanism. 2: The brain uses a ‘guess and check’ heuristic in which visual feedback obtained after an eye movement to a sound alters future performance, perhaps by recruiting the brain’s reward-related circuitry. We assessed the effects of exposure to visual stimuli spatially mismatched from sounds on performance of an interleaved auditory-only saccade task. We found that when humans and monkeys were provided the visual stimulus asynchronously with the sound, but as feedback to an auditory-guided saccade, they shifted their subsequent auditory-only performance toward the direction of the visual cue by 1.3-1.7 degrees, or 22-28% of the original 6-degree visual-auditory mismatch. In contrast, when the visual stimulus was presented synchronously with the sound but extinguished too quickly to provide this feedback, there was little change in subsequent auditory-only performance. Our results suggest that the outcome of our own actions is vital to localizing sounds correctly. Contrary to previous expectations, visual calibration of auditory space does not appear to require visual-auditory associations based on synchrony/simultaneity.
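The ‘guess and check’ account above can be caricatured as an error-correcting update applied after each feedback trial. In this sketch the learning rate and trial count are hypothetical, chosen only so that the cumulative shift lands in the reported 22-28% range; none of these numbers come from the fitted data.

```python
def adapt(mismatch_deg, learning_rate, n_trials):
    """Cumulative shift of auditory-only saccades after n_trials of
    post-saccade visual feedback displaced by mismatch_deg, assuming
    each trial corrects a fixed fraction of the remaining error."""
    shift = 0.0
    for _ in range(n_trials):
        shift += learning_rate * (mismatch_deg - shift)
    return shift

# Illustrative parameters: a 6-degree visual-auditory mismatch and a
# small per-trial correction (both hypothetical).
shift = adapt(mismatch_deg=6.0, learning_rate=0.01, n_trials=30)
print(round(shift / 6.0, 2))  # 0.26: within the reported 22-28% range
```

Because each trial corrects a fraction of the *remaining* error, the shift saturates rather than growing without bound, which is consistent with a partial (22-28%) rather than complete compensation of the mismatch.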
My next line of research examines how electrical stimulation of the inferior colliculus influences the perception of sounds in a nonhuman primate. The central nucleus of the inferior colliculus is the major ascending relay of auditory information: almost all auditory signals pass through it before reaching the forebrain. This makes the inferior colliculus an ideal structure for examining the format of the inputs to the forebrain and, by extension, the processing of auditory scenes that occurs in the brainstem, and therefore an attractive target for understanding stimulus integration in the ascending auditory pathway.
Moreover, understanding the relationship between the auditory selectivity of neurons and their contribution to perception is critical to the design of effective auditory brain prosthetics. These prosthetics seek to mimic natural activity patterns to achieve desired perceptual outcomes. We measured the contribution of inferior colliculus (IC) sites to perception using combined recording and electrical stimulation. Monkeys performed a frequency-based discrimination task, reporting whether a probe sound was higher or lower in frequency than a reference sound. Stimulation pulses were paired with the probe sound on 50% of trials (0.5-80 µA, 100-300 Hz, n=172 IC locations in 3 rhesus monkeys). Electrical stimulation tended to bias the animals’ judgments in a fashion that was coarsely but significantly correlated with the best frequency of the stimulation site in comparison to the reference frequency employed in the task. Although there was considerable variability in the effects of stimulation (including impairments in performance and shifts in performance away from the direction predicted based on the site’s response properties), the results indicate that stimulation of the IC can evoke percepts correlated with the frequency tuning properties of the IC. Consistent with the implications of recent human studies, the main avenue for improvement for the auditory midbrain implant suggested by our findings is to increase the number and spatial extent of electrodes, to increase the size of the region that can be electrically activated and provide a greater range of evoked percepts.
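At its simplest, the stimulation effect described above reduces to comparing the proportion of "probe higher" reports on stimulated versus unstimulated trials at each site. Below is a minimal sketch of that comparison on made-up trial records; the trial counts and proportions are hypothetical, not data from the study.

```python
def stimulation_bias(trials):
    """trials: list of (stimulated: bool, reported_higher: bool) pairs.
    Returns P(report 'higher' | stimulation) - P(report 'higher' | none).
    A positive value means stimulation biased judgments toward 'higher',
    as expected for a site whose best frequency exceeds the reference."""
    stim = [r for s, r in trials if s]
    base = [r for s, r in trials if not s]
    return sum(stim) / len(stim) - sum(base) / len(base)

# Hypothetical session: stimulation paired with the probe on 50% of
# trials, and the site's best frequency above the reference frequency.
example = [(True, True)] * 35 + [(True, False)] * 15 \
        + [(False, True)] * 25 + [(False, False)] * 25
print(round(stimulation_bias(example), 2))  # 0.2
```

Correlating this per-site bias with the sign of (best frequency minus reference frequency) across sites would reproduce the kind of coarse-but-significant relationship the abstract reports.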
My next line of research employs a frequency-tagging approach to examine the extent to which multiple sound sources are combined (or segregated) in the nonhuman primate inferior colliculus. In the single-sound case, most inferior colliculus neurons respond and entrain to sounds in a very broad region of space, and many are entirely spatially insensitive, so it is unknown how the neurons will respond to a situation with more than one sound. I use multiple AM stimuli of different frequencies, which the inferior colliculus represents using a spike timing code. This allows me to measure spike timing in the inferior colliculus to determine which sound source is responsible for neural activity in an auditory scene containing multiple sounds. Using this approach, I find that the same neurons that are tuned to broad regions of space in the single sound condition become dramatically more selective in the dual sound condition, preferentially entraining spikes to stimuli from a smaller region of space. I will examine the possibility that there may be a conceptual linkage between this finding and the finding of receptive field shifts in the visual system.
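The spike-timing entrainment referred to above is commonly quantified as vector strength: each spike time is mapped to a phase of the tagged AM frequency, and the length of the mean resultant vector indexes how strongly spikes lock to that modulation. A minimal sketch follows; the spike trains are fabricated for illustration.

```python
import math

def vector_strength(spike_times_s, mod_freq_hz):
    """Mean resultant length of spike phases relative to the AM cycle:
    1.0 = perfect phase locking, values near 0 = no locking."""
    phases = [2 * math.pi * mod_freq_hz * t for t in spike_times_s]
    c = sum(math.cos(p) for p in phases) / len(phases)
    s = sum(math.sin(p) for p in phases) / len(phases)
    return math.hypot(c, s)

# Spikes locked to a 40-Hz modulator (one spike per cycle) entrain
# perfectly; spikes at the same rate but phase-jittered entrain weakly.
locked = [i / 40.0 for i in range(40)]
print(round(vector_strength(locked, 40.0), 3))  # 1.0
```

With multiple simultaneous sources tagged at different AM frequencies, computing vector strength at each tag frequency attributes a neuron's spikes to the source it is entraining to, which is the logic behind the dual-sound analysis described above.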
In chapter 5, I will comment on these findings more generally, compare them to existing theoretical models, and discuss what these results tell us about processing in the central nervous system in a multi-stimulus situation. My results suggest that the brain is flexible in its processing and can adapt its integration schema to fit the available cues and the demands of the task.
Abstract:
Speech perception routinely takes place in noisy or degraded listening environments, leading to ambiguity in the identity of the speech token. Here, I present one review paper and two experimental papers that highlight cognitive and visual speech contributions to the listening process, particularly in challenging listening environments. First, I survey the literature linking audiometric age-related hearing loss and cognitive decline and review the four proposed causal mechanisms underlying this link. I argue that future research in this area requires greater consideration of the functional overlap between hearing and cognition. I also present an alternative framework for understanding causal relationships between age-related declines in hearing and cognition, with emphasis on the interconnected nature of hearing and cognition and likely contributions from multiple causal mechanisms. I also provide a number of testable hypotheses to examine how impairments in one domain may affect the other. In my first experimental study, I examine the direct contribution of working memory (through a cognitive training manipulation) to speech-in-noise comprehension in older adults. My results challenge the efficacy of cognitive training more generally, and also provide support for the contribution of sentence context in reducing working memory load. My findings also challenge the ubiquitous use of the Reading Span test as a pure test of working memory. In a second experimental (fMRI) study, I examine the role of attention in audiovisual speech integration, particularly when the acoustic signal is degraded. I demonstrate that attentional processes support audiovisual speech integration in the middle and superior temporal gyri, as well as the fusiform gyrus. My results also suggest that the superior temporal sulcus is sensitive to intelligibility enhancement, regardless of how this benefit is obtained (i.e., whether it is obtained through visual speech information or speech clarity).
In addition, I also demonstrate that both the cingulo-opercular network and motor speech areas are recruited in difficult listening conditions. Taken together, these findings augment our understanding of cognitive contributions to the listening process and demonstrate that memory, working memory, and executive control networks may flexibly be recruited in order to meet listening demands in challenging environments.
Abstract:
Harmony is one of the main objectives in surgical and orthodontic treatment, and this harmony must be present in the smile as well as in the face. The aim of the present study was to assess the perceptions of professionals and laypersons regarding the harmony of the smile of patients with or without vertical maxillary alterations. Sixty observers (oral and maxillofacial surgeons, orthodontists and laypersons) rated the degree of harmony of six smiles using an objective questionnaire and indicated whether or not corrective surgery was needed. Ratings were recorded on a Likert scale from 1 to 5, and mixed regression was used to determine differences between the three groups. Statistically significant differences in smile harmony were found only between the oral and maxillofacial surgeons and the laypersons, with laypersons being more critical when assessing the smile; there was no statistical difference between the other groups for smile harmony or for the indication of corrective surgery. The patterns of greater or lesser harmony reported by the observers were similar to those described in the literature as the ideal standard for vertical maxillary positioning. Overall, the present study demonstrates that adequate interaction between surgeons, orthodontists and laypersons is essential in order to achieve facial harmony with orthodontic and/or surgical treatment.
Opinion of specialists and laypersons about the smile in relation to the vertical positioning of the maxilla.
Abstract:
Mindfulness is a practice and a form of consciousness which has been the basis for innovative interventions in care and health promotion. This study presents mindfulness and describes and discusses the process of cultural adaptation of the Freiburg Mindfulness Inventory (FMI) to Brazilian Portuguese. From the original version of this pioneering instrument for assessing mindfulness, two translations and two back-translations were made. These were evaluated by a committee of 14 experts (Buddhists, linguists, health professionals), who helped to create two versions for the first pre-test; based on these versions, suggestions were gathered through interviews with a sample of 41 people from the population. Given the difficulties in understanding concepts that are unfamiliar to Brazilian culture, a new version was prepared with additional explanations, which underwent a further evaluation by the experts and a second pre-test with 72 people. This process aimed to address the limitations and challenges of evaluating mindfulness in a country of Western culture through a self-report instrument based on Buddhist psychology. With appropriate levels of clarity and equivalence with the original instrument, the Freiburg Mindfulness Inventory adapted for Brazil is presented.