875 results for auditory cues


Relevance: 60.00%

Abstract:

Gait disorders are identified in Parkinson's disease (PD) patients. As a result, the capacity to walk independently and to interact with the environment can be impaired. Auditory cues have therefore been used as a non-pharmacological treatment to improve the locomotor impairment of PD patients. However, these effects have been observed on regular surfaces, and the effects of auditory cues on gait during obstacle avoidance, which may be more threatening for these patients, are not known. Moreover, few studies in the literature compare PD patients with older adults during locomotor tasks and obstacle avoidance in association with the effects of auditory cues. The aim of this study is to compare the effects of auditory cues on gait and on obstacle avoidance in PD patients and older adults. Thirty subjects distributed into two groups (Group 1: 15 PD patients; Group 2: 15 healthy older adults) will participate in this study. After approval of participation, the clinical condition will be assessed by a physician. To investigate the locomotor pattern, a kinematic analysis will be performed. The experimental task is to walk along an 8 m pathway, and 18 trials will be performed (6 for free gait and 12 for adaptive gait). For adaptive gait, two different obstacle heights will be manipulated: high obstacle (HO) and low obstacle (LO). To verify possible differences between the groups and the experimental conditions, multivariate tests will be used with a significance level of 0.05. MANOVA revealed effects of condition and task: with auditory cues, we observed increased cadence and reduced single-support time and stride length. When the tasks were compared, subjects showed lower velocity and shorter stride length in the LO task. (Complete abstract: click electronic access below.)
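The spatiotemporal parameters this abstract reports (cadence, stride length, velocity) can be derived from kinematic heel-strike events. A minimal sketch with made-up marker data, not the study's actual pipeline:

```python
# Sketch: spatiotemporal gait parameters from heel-strike events of one foot.
# Times are in seconds, positions in metres along the walkway; the numbers
# below are illustrative, not study data.

def gait_parameters(heel_strikes):
    """heel_strikes: list of (time_s, position_m) for successive heel strikes."""
    times = [t for t, _ in heel_strikes]
    positions = [p for _, p in heel_strikes]
    # A stride is the interval between successive heel strikes of the same foot.
    stride_times = [t2 - t1 for t1, t2 in zip(times, times[1:])]
    stride_lengths = [p2 - p1 for p1, p2 in zip(positions, positions[1:])]
    mean_stride_time = sum(stride_times) / len(stride_times)
    mean_stride_length = sum(stride_lengths) / len(stride_lengths)
    return {
        # Two steps per stride, so cadence (steps/min) = 120 / stride time (s).
        "cadence_steps_per_min": 120.0 / mean_stride_time,
        "stride_length_m": mean_stride_length,
        "velocity_m_per_s": mean_stride_length / mean_stride_time,
    }

right_heel = [(0.0, 0.0), (1.1, 1.2), (2.2, 2.4), (3.3, 3.6)]
params = gait_parameters(right_heel)
```

With auditory cueing, the abstract's reported increase in cadence and decrease in stride length would show up directly in these three quantities.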

Relevance: 60.00%

Abstract:

1. With the global increase in CO2 emissions, there is a pressing need for studies aimed at understanding the effects of ocean acidification on marine ecosystems. Several studies have reported that exposure to CO2 impairs chemosensory responses of juvenile coral reef fishes to predators. Moreover, one recent study pointed to impaired responses of reef fish to auditory cues that indicate risky locations. These studies suggest that altered behaviour following exposure to elevated CO2 is caused by a systemic effect at the neural level. 2. The goal of our experiment was to test whether juvenile damselfish Pomacentrus amboinensis exposed to different levels of CO2 would respond differently to a potential threat, the sight of a large novel coral reef fish, a spiny chromis, Acanthochromis polyacanthus, placed in a watertight bag. 3. Juvenile damselfish exposed to 440 (current-day control), 550 or 700 µatm CO2 did not differ in their response to the chromis. However, fish exposed to 850 µatm showed reduced antipredator responses: they failed to show the same reduction in foraging, activity and area use in response to the chromis. Moreover, they moved closer to the chromis and lacked any bobbing behaviour typically displayed by juvenile damselfishes in threatening situations. 4. Our results are the first to suggest that responses to visual cues of risk may be impaired by CO2, and provide strong evidence that the multi-sensory effects of CO2 may stem from systemic effects at the neural level.

Relevance: 60.00%

Abstract:

This study focuses on the interactional functions of non-standard spelling, in particular letter repetition, used in text-based computer-mediated communication as a means of non-verbal signalling. The aim of this paper is to assess the current state of non-verbal cue research in computer-mediated discourse and demonstrate the need for a more comprehensive and methodologically rigorous exploration of written non-verbal signalling. The study proposes a contextual and usage-centered view of written paralanguage. Through illustrative, close linguistic analyses the study proves that previous approaches to non-standard spelling based on their relation to the spoken word might not account for the complexities of this CMC cue, and in order to further our understanding of their interactional functions it is more fruitful to describe the role they play during the contextualisation of the verbal messages. The interactional sociolinguistic approach taken in the analysis demonstrates the range of interactional functions letter repetition can achieve, including contribution to the inscription of socio-emotional information into writing, to the evoking of auditory cues or to a display of informality through using a relaxed writing style.

Relevance: 60.00%

Abstract:

Advanced age may become a limiting factor for the maintenance of rhythms in organisms, reducing the capacity for generation and synchronization of biological rhythms. In this study, the influence of aging on the expression of endogenous periodicity and on the photic and social synchronization of the circadian activity rhythm (CAR) was evaluated in a diurnal primate, the marmoset (Callithrix jacchus). The study had two approaches: a longitudinal design, performed with one male marmoset in two different phases, as an adult (3 years old) and when aged (9 y.o.) (study 1), and a cross-sectional design, with 6 aged (♂: 9.7 ± 2.0 y.o.) and 11 adult animals (♂: 4.2 ± 0.8 y.o.) (study 2). The evaluation of photic synchronization involved two LD conditions (natural and artificial illumination). In study 1, the animal was subjected to the following stages: LD (12:12, ~350:~2 lx), LL (~350 lx) and LD resynchronization. In study 2, the animals were first evaluated under natural LD and then under the same sequence of stages as in study 1. During the LL stage of study 2, vocalizations of conspecifics kept under natural LD outside the colony were considered a temporal cue for social synchronization. Activity was recorded automatically at five-minute intervals by infrared sensors and actimeters in studies 1 and 2, respectively. In general, under LD conditions the aged showed a more fragmented activity pattern (> IV, < H and > PSD; ANOVA, p < 0.05), lower activity levels (ANOVA, p < 0.05) and a shorter active phase (ANOVA, p < 0.05) compared to adults. Under natural LD, the aged showed a pronounced phase delay in the onset and offset of the active phase (ANOVA, p < 0.05), while the adults had an active phase better adjusted to the light phase. Under artificial LD, the aged showed a phase advance and better adjustment of activity onset and offset relative to the LD cycle (ANOVA, p < 0.05).
Under LL, there was a positive correlation between age and the endogenous period (τ) over the first 20 days (Spearman correlation, p < 0.05), with a prolonged τ maintained in two aged animals. In this condition, most adults showed a free-running circadian activity rhythm with τ < 24 h for the first 30 days, followed by relative coordination mediated by auditory cues. In study 2, cross-correlation analysis between the activity profiles of animals in LL and control animals kept under natural LD showed less social synchronization in the aged. Upon return to LD, resynchronization was slower in the aged (t-test, p < 0.05), and one aged animal lost the capacity to resynchronize. Taken together, the data suggest that aging in marmosets may be related to: 1) lower amplitude and greater fragmentation of activity, accompanied by phase delay and lengthening of the period, caused by changes in photic input and in the generation and behavioral expression of the CAR; 2) a lower capacity for photic synchronization of the circadian activity rhythm, which may become more robust under artificial lighting, possibly due to the higher light intensities at the beginning of the active phase produced by the abrupt transitions between light and dark phases; and 3) a smaller capacity for non-photic synchronization by auditory cues from conspecifics, possibly due to reduced sensory input and reduced responsiveness of the circadian oscillators to auditory cues, which can make the aged marmoset more vulnerable, as these social cues may act as an important supporting factor for photic synchronization.
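The free-running period τ under constant light is commonly estimated from the day-to-day drift of activity onset: a regression of onset time against day number gives the drift per day, and τ = 24 h + drift. A minimal sketch of this standard actogram technique with made-up onset times (not necessarily the authors' exact analysis):

```python
import numpy as np

# Onset of activity (clock hour) on successive days under constant light (LL).
# A negative daily drift means onsets occur earlier each day, i.e. tau < 24 h.
days = np.arange(10)
onsets_h = 6.0 - 0.5 * days  # illustrative: onset advances 0.5 h per day

slope, intercept = np.polyfit(days, onsets_h, 1)  # drift in h/day
tau_h = 24.0 + slope  # free-running period estimate
```

On these illustrative data the fit recovers τ = 23.5 h, matching the abstract's pattern of adults free-running with τ < 24 h.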


Relevance: 40.00%

Abstract:

Sounds such as the voice or musical instruments can be recognized on the basis of timbre alone. Here, sound recognition was investigated with severely reduced timbre cues. Short snippets of naturally recorded sounds were extracted from a large corpus. Listeners were asked to report a target category (e.g., sung voices) among other sounds (e.g., musical instruments). All sound categories covered the same pitch range, so the task had to be solved on timbre cues alone. The minimum duration for which performance was above chance was found to be short, on the order of a few milliseconds, with the best performance for voice targets. Performance was independent of pitch and was maintained when stimuli contained less than a full waveform cycle. Recognition was not generally better when the sound snippets were time-aligned with the sound onset compared to when they were extracted with a random starting time. Finally, performance did not depend on feedback or training, suggesting that the cues used by listeners in the artificial gating task were similar to those relevant for longer, more familiar sounds. The results show that timbre cues for sound recognition are available at a variety of time scales, including very short ones.

Relevance: 40.00%

Abstract:

We frequently encounter conflicting emotion cues. This study examined how the neural response to emotional prosody differed in the presence of congruent and incongruent lexico-semantic cues. Two hypotheses were assessed: (i) decoding emotional prosody with conflicting lexico-semantic cues would activate brain regions associated with cognitive conflict (anterior cingulate and dorsolateral prefrontal cortex) or (ii) the increased attentional load of incongruent cues would modulate the activity of regions that decode emotional prosody (right lateral temporal cortex). While the participants indicated the emotion conveyed by prosody, functional magnetic resonance imaging data were acquired on a 3T scanner using blood oxygenation level-dependent contrast. Using SPM5, the response to congruent cues was contrasted with that to emotional prosody alone, as was the response to incongruent lexico-semantic cues (for the 'cognitive conflict' hypothesis). The right lateral temporal lobe region of interest analyses examined modulation of activity in this brain region between these two contrasts (for the 'prosody cortex' hypothesis). Dorsolateral prefrontal and anterior cingulate cortex activity was not observed, and neither was attentional modulation of activity in right lateral temporal cortex activity. However, decoding emotional prosody with incongruent lexico-semantic cues was strongly associated with left inferior frontal gyrus activity. This specialist form of conflict is therefore not processed by the brain using the same neural resources as non-affective cognitive conflict and neither can it be handled by associated sensory cortex alone. The recruitment of inferior frontal cortex may indicate increased semantic processing demands but other contributory functions of this region should be explored.

Relevance: 40.00%

Abstract:

Listeners can attend to one of several simultaneous messages by tracking one speaker's voice characteristics. Using differences in the location of sounds in a room, we ask how well cues arising from spatial position compete with these characteristics. Listeners decided which of two simultaneous target words belonged in an attended "context" phrase when it was played simultaneously with a different "distracter" context. Talker difference was in competition with position difference, so the response indicates which cue type the listener was tracking. Spatial position was found to override talker difference in dichotic conditions when the talkers were similar (male). The salience of cues associated with differences in the sounds' bearings decreased with the distance between listener and sources; these cues are more effective binaurally. However, there appear to be other cues that increase in salience with the distance between the sounds. This increase is more prominent in diotic conditions, indicating that these cues are largely monaural. Distances between spectra, calculated using a gammatone filterbank (with ERB-spaced CFs) of the room's impulse responses at different locations, were computed; comparison with listeners' responses suggested some slight monaural loudness cues, but also monaural "timbre" cues arising from the temporal- and spectral-envelope differences in the speech from different locations.
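ERB-spaced centre frequencies for a gammatone filterbank of the kind mentioned here are usually obtained by spacing channels uniformly on the ERB-rate scale of Glasberg and Moore, E(f) = 21.4·log10(0.00437·f + 1), and converting back to hertz. A minimal sketch (the paper's exact filterbank parameters are not given, so the range and channel count below are assumptions):

```python
import numpy as np

def erb_rate(f_hz):
    """Glasberg & Moore ERB-rate (in Cams) for a frequency in Hz."""
    return 21.4 * np.log10(0.00437 * f_hz + 1.0)

def inverse_erb_rate(e):
    """Frequency in Hz corresponding to a given ERB-rate."""
    return (10.0 ** (e / 21.4) - 1.0) / 0.00437

def erb_spaced_cfs(f_lo, f_hi, n_channels):
    """Centre frequencies uniformly spaced on the ERB-rate scale."""
    e = np.linspace(erb_rate(f_lo), erb_rate(f_hi), n_channels)
    return inverse_erb_rate(e)

# Hypothetical 32-channel bank spanning 100 Hz - 8 kHz.
cfs = erb_spaced_cfs(100.0, 8000.0, 32)
```

Uniform ERB spacing packs channels densely at low frequencies and sparsely at high ones, mimicking cochlear frequency resolution, which is why it is the conventional choice for gammatone analysis of room impulse responses.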

Relevance: 40.00%

Abstract:

Barn owls can localize a sound source using either the map of auditory space contained in the optic tectum or the auditory forebrain. The auditory thalamus, nucleus ovoidalis (N.Ov), is situated between these two auditory areas, and its inactivation precludes the use of the auditory forebrain for sound localization. We examined the sources of inputs to the N.Ov as well as their patterns of termination within the nucleus. We also examined the response of single neurons within the N.Ov to tonal stimuli and sound localization cues. Afferents to the N.Ov originated with a diffuse population of neurons located bilaterally within the lateral shell, core, and medial shell subdivisions of the central nucleus of the inferior colliculus. Additional afferent input originated from the ipsilateral ventral nucleus of the lateral lemniscus. No afferent input was provided to the N.Ov from the external nucleus of the inferior colliculus or the optic tectum. The N.Ov was tonotopically organized with high frequencies represented dorsally and low frequencies ventrally. Although neurons in the N.Ov responded to localization cues, there was no apparent topographic mapping of these cues within the nucleus, in contrast to the tectal pathway. However, nearly all possible types of binaural response to sound localization cues were represented. These findings suggest that in the thalamo-telencephalic auditory pathway, sound localization is subserved by a nontopographic representation of auditory space.

Relevance: 30.00%

Abstract:

Asperger Syndrome (AS) belongs to autism spectrum disorders where both verbal and non-verbal communication difficulties are at the core of the impairment. Social communication requires a complex use of affective, linguistic-cognitive and perceptual processes. In the four studies included in the current thesis, some of the linguistic and perceptual factors that are important for face-to-face communication were studied using behavioural methods. In all four studies the results obtained from individuals with AS were compared with typically developed age, gender and IQ matched controls. First, the language skills of school-aged children were characterized in detail with standardized tests that measured different aspects of receptive and expressive language (Study I). The children with AS were found to be worse than the controls in following complex verbal instructions. Next, the visual perception of facial expressions of emotion with varying degrees of visual detail was examined (Study II). Adults with AS were found to have impaired recognition of facial expressions on the basis of very low spatial frequencies which are important for processing global information. Following that, multisensory perception was investigated by looking at audiovisual speech perception (Studies III and IV). Adults with AS were found to perceive audiovisual speech qualitatively differently from typically developed adults, although both groups were equally accurate in recognizing auditory and visual speech presented alone. Finally, the effect of attention on audiovisual speech perception was studied by registering eye gaze behaviour (Study III) and by studying the voluntary control of visual attention (Study IV). The groups did not differ in eye gaze behaviour or in the voluntary control of visual attention. The results of the study series demonstrate that many factors underpinning face-to-face social communication are atypical in AS. 
In contrast with previous assumptions about intact language abilities, the current results show that children with AS have difficulties in understanding complex verbal instructions. Furthermore, the study makes clear that deviations in the perception of global features in faces expressing emotions as well as in the multisensory perception of speech are likely to harm face-to-face social communication.

Relevance: 30.00%

Abstract:

Multiple sound sources often contain harmonics that overlap and may be degraded by environmental noise. The auditory system is capable of teasing apart these sources into distinct mental objects, or streams. Such an "auditory scene analysis" enables the brain to solve the cocktail party problem. A neural network model of auditory scene analysis, called the AIRSTREAM model, is presented to propose how the brain accomplishes this feat. The model clarifies how the frequency components that correspond to a given acoustic source may be coherently grouped together into distinct streams based on pitch and spatial cues. The model also clarifies how multiple streams may be distinguished and separated by the brain. Streams are formed as spectral-pitch resonances that emerge through feedback interactions between frequency-specific spectral representations of a sound source and its pitch. First, the model transforms a sound into a spatial pattern of frequency-specific activation across a spectral stream layer. The sound has multiple parallel representations at this layer. A sound's spectral representation activates a bottom-up filter that is sensitive to harmonics of the sound's pitch. The filter activates a pitch category which, in turn, activates a top-down expectation that allows one voice or instrument to be tracked through a noisy multiple-source environment. Spectral components are suppressed if they do not match harmonics of the top-down expectation that is read out by the selected pitch, thereby allowing another stream to capture these components, as in the "old-plus-new heuristic" of Bregman. Multiple simultaneously occurring spectral-pitch resonances can thereby emerge. These resonance and matching mechanisms are specialized versions of Adaptive Resonance Theory, or ART, which clarifies how pitch representations can self-organize during learning of harmonic bottom-up filters and top-down expectations.
The model also clarifies how spatial location cues can help to disambiguate two sources with similar spectral cues. Data are simulated from psychophysical grouping experiments, such as how a tone sweeping upwards in frequency creates a bounce percept by grouping with a downward-sweeping tone due to proximity in frequency, even if noise replaces the tones at their intersection point. Illusory auditory percepts are also simulated, such as the auditory continuity illusion of a tone continuing through a noise burst even if the tone is not present during the noise, and the scale illusion of Deutsch, whereby downward and upward scales presented alternately to the two ears are regrouped based on frequency proximity, leading to a bounce percept. Since related sorts of resonances have been used to quantitatively simulate psychophysical data about speech perception, the model strengthens the hypothesis that ART-like mechanisms are used at multiple levels of the auditory system. Proposals for developing the model to explain more complex streaming data are also provided.
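The bottom-up harmonic filtering described above can be illustrated as a harmonic sieve: components falling near integer multiples of the selected pitch are captured by that stream, and the remainder is released for another stream to capture, as in Bregman's old-plus-new heuristic. A toy sketch of that grouping rule, not the AIRSTREAM implementation itself:

```python
def harmonic_sieve(components_hz, f0_hz, tolerance=0.04):
    """Split spectral components into those matching harmonics of f0
    (relative mismatch below `tolerance`) and the residual left for
    other streams. Tolerance value is an illustrative assumption."""
    matched, residual = [], []
    for f in components_hz:
        n = max(1, round(f / f0_hz))  # nearest harmonic number
        if abs(f - n * f0_hz) / (n * f0_hz) <= tolerance:
            matched.append(f)
        else:
            residual.append(f)
    return matched, residual

# A 200-Hz pitch captures its harmonics; the 450-Hz component is not
# suppressed into the stream and remains available for a second stream.
matched, residual = harmonic_sieve([200.0, 400.0, 450.0, 600.0], 200.0)
```

In the full model this sieve is implemented by learned bottom-up filters and top-down expectations rather than a fixed tolerance, but the grouping outcome is the same in spirit.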

Relevance: 30.00%

Abstract:

Remembering past events - or episodic retrieval - consists of several components. There is evidence that mental imagery plays an important role in retrieval and that the brain regions supporting imagery overlap with those supporting retrieval. An open issue is to what extent these regions support successful vs. unsuccessful imagery and retrieval processes. Previous studies that examined regional overlap between imagery and retrieval used uncontrolled memory conditions, such as autobiographical memory tasks, that cannot distinguish between successful and unsuccessful retrieval. A second issue is that fMRI studies that compared imagery and retrieval have used modality-aspecific cues that are likely to activate auditory and visual processing regions simultaneously. Thus, it is not clear to what extent identified brain regions support modality-specific or modality-independent imagery and retrieval processes. In the current fMRI study, we addressed this issue by comparing imagery to retrieval under controlled memory conditions in both auditory and visual modalities. We also obtained subjective measures of imagery quality allowing us to dissociate regions contributing to successful vs. unsuccessful imagery. Results indicated that auditory and visual regions contribute both to imagery and retrieval in a modality-specific fashion. In addition, we identified four sets of brain regions with distinct patterns of activity that contributed to imagery and retrieval in a modality-independent fashion. The first set of regions, including hippocampus, posterior cingulate cortex, medial prefrontal cortex and angular gyrus, showed a pattern common to imagery/retrieval and consistent with successful performance regardless of task. The second set of regions, including dorsal precuneus, anterior cingulate and dorsolateral prefrontal cortex, also showed a pattern common to imagery and retrieval, but consistent with unsuccessful performance during both tasks. 
Third, left ventrolateral prefrontal cortex showed an interaction between task and performance and was associated with successful imagery but unsuccessful retrieval. Finally, the fourth set of regions, including ventral precuneus, midcingulate cortex and supramarginal gyrus, showed the opposite interaction, supporting unsuccessful imagery, but successful retrieval performance. Results are discussed in relation to reconstructive, attentional, semantic memory, and working memory processes. This is the first study to separate the neural correlates of successful and unsuccessful performance for both imagery and retrieval and for both auditory and visual modalities.

Relevance: 30.00%

Abstract:

Human listeners seem to be remarkably able to recognise acoustic sound sources based on timbre cues. Here we describe a psychophysical paradigm to estimate the time it takes to recognise a set of complex sounds differing only in timbre cues: both in terms of the minimum duration of the sounds and the inferred neural processing time. Listeners had to respond to the human voice while ignoring a set of distractors. All sounds were recorded from natural sources over the same pitch range and equalised to the same duration and power. In a first experiment, stimuli were gated in time with a raised-cosine window of variable duration and random onset time. A voice/non-voice (yes/no) task was used. Performance, as measured by d', remained above chance for the shortest sounds tested (2 ms); d's above 1 were observed for durations longer than or equal to 8 ms. Then, we constructed sequences of short sounds presented in rapid succession. Listeners were asked to report the presence of a single voice token that could occur at a random position within the sequence. This method is analogous to the "rapid sequential visual presentation" paradigm (RSVP), which has been used to evaluate neural processing time for images. For 500-ms sequences made of 32-ms and 16-ms sounds, d' remained above chance for presentation rates of up to 30 sounds per second. There was no effect of the pitch relation between successive sounds: identical for all sounds in the sequence or random for each sound. This implies that the task was not determined by streaming or forward masking, as both phenomena would predict better performance for the random pitch condition. Overall, the recognition of familiar sound categories such as the voice seems to be surprisingly fast, both in terms of the acoustic duration required and of the underlying neural time constants.
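The sensitivity index d′ used in this yes/no paradigm is conventionally computed as the difference between the z-transforms of the hit rate and the false-alarm rate, d′ = z(H) − z(F). A minimal sketch:

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Yes/no sensitivity index: z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Illustrative rates: a hit rate of 0.84 against chance-level false
# alarms (0.50) gives d' just under 1, near the level the abstract
# treats as reliable recognition.
sensitivity = d_prime(0.84, 0.50)
```

In practice, hit and false-alarm rates of exactly 0 or 1 must be adjusted (e.g. by a 1/(2N) correction) before the z-transform, since the inverse normal CDF is undefined at those extremes.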

Relevance: 30.00%

Abstract:

The use of visual cues during the processing of audiovisual (AV) speech is known to be less efficient in children and adults with language difficulties and difficulties are known to be more prevalent in children from low-income populations. In the present study, we followed an economically diverse group of thirty-seven infants longitudinally from 6–9 months to 14–16 months of age. We used eye-tracking to examine whether individual differences in visual attention during AV processing of speech in 6–9 month old infants, particularly when processing congruent and incongruent auditory and visual speech cues, might be indicative of their later language development. Twenty-two of these 6–9 month old infants also participated in an event-related potential (ERP) AV task within the same experimental session. Language development was then followed-up at the age of 14–16 months, using two measures of language development, the Preschool Language Scale and the Oxford Communicative Development Inventory. The results show that those infants who were less efficient in auditory speech processing at the age of 6–9 months had lower receptive language scores at 14–16 months. A correlational analysis revealed that the pattern of face scanning and ERP responses to audiovisually incongruent stimuli at 6–9 months were both significantly associated with language development at 14–16 months. These findings add to the understanding of individual differences in neural signatures of AV processing and associated looking behavior in infants.

Relevance: 30.00%

Abstract:

This dissertation examines auditory perception and audio-visual reception in noise for both hearing-impaired and normal hearing persons, with a goal of determining some of the noise conditions under which amplified acoustic cues for speech can be beneficial to hearing-impaired persons.