986 results for Auditory-visual Interaction


Relevance: 80.00%
Publisher:
Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 80.00%
Publisher:
Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance: 80.00%
Publisher:
Abstract:

Pós-graduação em Psicologia do Desenvolvimento e Aprendizagem - FC

Relevance: 80.00%
Publisher:
Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance: 80.00%
Publisher:
Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance: 80.00%
Publisher:
Abstract:

The aim of this work was to verify the effect of teaching echoic behavior on picture naming in four children, aged eight to nine years, with prelingual hearing impairment who use cochlear implants. The design comprised: (a) pre-training, which taught the matching-to-sample task; (b) pre-tests, which selected three words to be taught; (c) teaching of auditory-visual conditional relations; (d) a naming post-test; (e) teaching of echoic behavior with orofacial cues; and (f) a second naming post-test. In the pre-test, all participants obtained lower percentages of correct responses in naming (60%-80%) and echoic behavior (20%-50%) than in word recognition (86%-93%). All participants learned the auditory-visual relations. For two participants, naming improved after the selection-based auditory training; for the other two, naming improved only after the echoic training. Analysis of the data showed that listening and speaking performances are established independently and require specific teaching conditions; in this study, even though the result did not generalize to all participants, the highest point-to-point correspondence in naming was obtained following the teaching of echoic behavior.

Relevance: 80.00%
Publisher:
Abstract:

Viewing the school as an institution that should provide equal access to education for every child, adolescent, and adult, this work focuses on inclusive education in defense of the right of all students to be together, learning and participating without any kind of discrimination. Given the broad scope of the theme of Inclusive Education, which MEC subdivides into four types of disability (auditory, visual, motor, and intellectual), we chose to address intellectual disability here, as it covers a wide range of limitations and is strongly present in the school environment. This work seeks to better understand this disability and the work done in the classroom with students who have it.

Relevance: 80.00%
Publisher:
Abstract:

The research activity carried out during the PhD course focused on the development of mathematical models of some cognitive processes and on their validation by means of data available in the literature, with a double aim: i) to achieve a better interpretation and explanation of the great amount of data obtained on these processes with different methodologies (electrophysiological recordings in animals; neuropsychological, psychophysical and neuroimaging studies in humans), and ii) to exploit model predictions and results to guide future research and experiments. In particular, the research activity focused on two projects: 1) the first concerns the development of neural oscillator networks, in order to investigate the mechanisms of synchronization of neural oscillatory activity during cognitive processes such as object recognition, memory, language and attention; 2) the second concerns the mathematical modelling of multisensory integration processes (e.g. visual-acoustic), which occur in several cortical and subcortical regions (in particular in a subcortical structure named the Superior Colliculus (SC)) and which are fundamental for orienting motor and attentive responses to external-world stimuli. This activity was carried out in collaboration with the Center for Studies and Researches in Cognitive Neuroscience of the University of Bologna (in Cesena) and the Department of Neurobiology and Anatomy of the Wake Forest University School of Medicine (NC, USA).

PART 1. The representation of objects in a number of cognitive functions, such as perception and recognition, relies on distributed processes in different cortical areas. One of the main neurophysiological questions concerns how the correlation between these disparate areas is realized, so as to group together the characteristics of the same object (binding problem) and to keep segregated the properties belonging to different objects simultaneously present (segmentation problem). Different theories have been proposed to address these questions (Barlow, 1972). One of the most influential is the so-called “assembly coding” theory, postulated by Singer (2003), according to which 1) an object is well described by a few fundamental properties, processed in different, distributed cortical areas; 2) recognition of the object is realized by means of the simultaneous activation of the cortical areas representing its different features; and 3) groups of properties belonging to different objects are kept separated in the time domain. In Chapter 1.1 and in Chapter 1.2 we present two neural network models for object recognition based on the “assembly coding” hypothesis. These models are networks of Wilson-Cowan oscillators which exploit: i) two high-level “Gestalt rules” (the similarity and previous-knowledge rules) to realize the functional link between elements of different cortical areas representing properties of the same object (binding problem); and ii) the synchronization of neural oscillatory activity in the γ-band (30-100 Hz) to segregate in time the representations of different objects simultaneously present (segmentation problem). These models are able to recognize and reconstruct multiple simultaneous external objects, even in difficult cases (some wrong or missing features, shared features, superimposed noise). In Chapter 1.3 the previous models are extended to realize a semantic memory, in which sensory-motor representations of objects are linked with words. To this aim, the previously developed network, devoted to the representation of objects as collections of sensory-motor features, is reciprocally linked with a second network devoted to the representation of words (lexical network). Synapses linking the two networks are trained via a time-dependent Hebbian rule during a training period in which individual objects are presented together with the corresponding words. Simulation results demonstrate that, during the retrieval phase, the network can deal with the simultaneous presence of objects (from sensory-motor inputs) and words (from linguistic inputs), can correctly associate objects with words, and can segment objects even in the presence of incomplete information. Moreover, the network can realize some semantic links among words representing objects with shared features. These results support the idea that semantic memory can be described as an integrated process, whose content is retrieved by the co-activation of different multimodal regions. In perspective, extended versions of this model may be used to test conceptual theories and to provide a quantitative assessment of existing data (for instance, concerning patients with neural deficits).

PART 2. The ability of the brain to integrate information from different sensory channels is fundamental to perception of the external world (Stein et al., 1993). It is well documented that a number of extraprimary areas have neurons capable of such a task; one of the best known of these is the superior colliculus (SC). This midbrain structure receives auditory, visual and somatosensory inputs from different subcortical and cortical areas, and is involved in the control of orientation to external events (Wallace et al., 1993). SC neurons respond to each of these sensory inputs separately, but are also capable of integrating them (Stein et al., 1993), so that the response to combined multisensory stimuli is greater than the response to the individual component stimuli (enhancement). This enhancement is proportionately greater when the modality-specific paired stimuli are weaker (the principle of inverse effectiveness). Several studies have shown that the capability of SC neurons to engage in multisensory integration requires inputs from cortex, primarily the anterior ectosylvian sulcus (AES) but also the rostral lateral suprasylvian sulcus (rLS). If these cortical inputs are deactivated, the response of SC neurons to cross-modal stimulation is no different from that evoked by the most effective of its individual component stimuli (Jiang et al., 2001). This phenomenon can be better understood through mathematical models: the use of mathematical models and neural networks can place the mass of data that has been accumulated about this phenomenon and its underlying circuitry into a coherent theoretical structure. In Chapter 2.1 a simple neural network model of this structure is presented; this model is able to reproduce a large number of SC behaviours, such as multisensory enhancement, multisensory and unisensory depression, and inverse effectiveness. In Chapter 2.2 this model was improved by incorporating more neurophysiological knowledge about the neural circuitry underlying SC multisensory integration, in order to suggest possible physiological mechanisms through which it is effected. This endeavour was realized in collaboration with Professor B.E. Stein and Doctor B. Rowland during the six-month period spent at the Department of Neurobiology and Anatomy of the Wake Forest University School of Medicine (NC, USA), within the Marco Polo Project. The model includes four distinct unisensory areas devoted to a topological representation of external stimuli. Two of them represent subregions of the AES (i.e., FAES, an auditory area, and AEV, a visual area) and send descending inputs to the ipsilateral SC; the other two represent subcortical areas (one auditory and one visual) projecting ascending inputs to the same SC. Different competitive mechanisms, realized by means of populations of interneurons, are used in the model to reproduce the different behaviour of SC neurons under conditions of cortical activation and deactivation. The model, with a single set of parameters, is able to mimic the behaviour of SC multisensory neurons in response to very different stimulus conditions (multisensory enhancement, inverse effectiveness, within- and cross-modal suppression of spatially disparate stimuli), with cortex functional or deactivated, and with a particular type of membrane receptor (NMDA receptors) active or inhibited. All these results agree with the data reported in Jiang et al. (2001) and in Binns and Salt (1996). The model suggests that non-linearities in neural responses and in synaptic (excitatory and inhibitory) connections can explain the fundamental aspects of multisensory integration, and it provides a biologically plausible hypothesis about the underlying circuitry.
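The enhancement and inverse-effectiveness behaviours described above follow naturally from a saturating (sigmoidal) neural response. The following minimal Python sketch is not taken from the thesis and uses made-up parameter values; it only illustrates the idea that summing a weak auditory and a weak visual input on the steep part of a sigmoid yields a proportionately much larger gain than summing two strong, near-saturating inputs.

```python
import numpy as np

def sigmoid(x, theta=1.0, slope=0.2):
    """Static sigmoidal activation standing in for an SC neuron's input-output function."""
    return 1.0 / (1.0 + np.exp(-(x - theta) / slope))

def enhancement(a_input, v_input):
    """Multisensory enhancement index: percent gain of the combined response
    over the strongest unisensory response."""
    r_a = sigmoid(a_input)             # auditory-alone response
    r_v = sigmoid(v_input)             # visual-alone response
    r_av = sigmoid(a_input + v_input)  # combined response (inputs simply summed here)
    return 100.0 * (r_av - max(r_a, r_v)) / max(r_a, r_v)

# Weak stimuli: responses sit on the rising part of the sigmoid -> large enhancement.
print(f"weak pair:   {enhancement(0.5, 0.5):6.1f}%")
# Strong stimuli: responses are near saturation -> small enhancement (inverse effectiveness).
print(f"strong pair: {enhancement(1.5, 1.5):6.1f}%")
```

With these illustrative numbers the weak pair yields an enhancement of several hundred percent, while the strong pair gains only a few percent, which is the qualitative signature of inverse effectiveness.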

Relevance: 80.00%
Publisher:
Abstract:

Comprehending speech is one of the most important human behaviors, but we are only beginning to understand how the brain accomplishes this difficult task. One key to speech perception seems to be that the brain integrates the independent sources of information available in the auditory and visual modalities in a process known as multisensory integration. This allows speech perception to be accurate even in environments in which one modality or the other is ambiguous in the context of noise. Previous electrophysiological and functional magnetic resonance imaging (fMRI) experiments have implicated the posterior superior temporal sulcus (STS) in auditory-visual integration of both speech and non-speech stimuli. While prior imaging studies have found increases in STS activity for audiovisual speech compared with unisensory auditory or visual speech, these studies do not provide a clear mechanism for how the STS communicates with early sensory areas to integrate the two streams of information into a coherent audiovisual percept. Furthermore, it is currently unknown whether activity within the STS is directly correlated with the strength of audiovisual perception. In order to better understand the cortical mechanisms that underlie audiovisual speech perception, we first studied STS activity and connectivity during the perception of speech with auditory and visual components of varying intelligibility. By studying fMRI activity during these noisy audiovisual speech stimuli, we found that STS connectivity with auditory and visual cortical areas mirrored perception: when the information from one modality is unreliable and noisy, the STS interacts less with the cortex processing that modality and more with the cortex processing the reliable information. We next characterized the role of STS activity during a striking audiovisual speech illusion, the McGurk effect, to determine whether activity within the STS predicts how strongly a person integrates auditory and visual speech information. Subjects with greater susceptibility to the McGurk effect exhibited stronger fMRI activation of the STS during perception of McGurk syllables, implying a direct correlation between the strength of audiovisual integration of speech and activity within the multisensory STS.

Relevance: 80.00%
Publisher:
Abstract:

Impairment of cognitive performance during and after high-altitude climbing has been described in numerous studies and has mostly been attributed to cerebral hypoxia and resulting functional and structural cerebral alterations. To investigate the hypothesis that high-altitude climbing leads to cognitive impairment, we used neuropsychological tests and measurements of eye movement (EM) performance under different stimulus conditions. The study was conducted in 32 mountaineers participating in an expedition to Muztagh Ata (7,546 m). Neuropsychological tests comprised figural fluency, line bisection, letter and number cancellation, and a modified pegboard task. Saccadic performance was evaluated under three stimulus conditions with varying degrees of cortical involvement: visually guided pro- and anti-saccades, and visuo-visual interaction. Typical saccade parameters (latency, main sequence, post-saccadic stability, and error rate) were computed off-line. Measurements were taken at a baseline altitude of 440 m, at altitudes of 4,497 m, 5,533 m, and 6,265 m, and again at 440 m. All subjects reached 5,533 m, and 28 reached 6,265 m. The neuropsychological test results did not reveal any cognitive impairment. Complete eye movement recordings for all stimulus conditions were obtained in 24 subjects at baseline and at least two altitudes, and in 10 subjects at baseline and all altitudes. Measurements of saccadic performance showed no dependence on any altitude-related parameter and were well within normal limits. Our data indicate that acclimatized climbers do not seem to suffer from significant cognitive deficits during or after climbs to altitudes above 7,500 m. We also demonstrated that investigation of EMs is feasible during high-altitude expeditions.

Relevance: 80.00%
Publisher:
Abstract:

Objective: There is convincing evidence that phonological, orthographic and semantic processes influence children’s ability to learn to read and spell words. So far, only a few studies have investigated the influence of implicit learning on literacy skills. Children are sensitive to the statistics of their learning environment: through frequent reading they acquire implicit knowledge about the frequency of letter patterns in written words, and they use this knowledge during reading and spelling. Additionally, semantic connections facilitate the storage of words in memory. Thus, the aim of this intervention study was to implement a word-picture training based on statistical and semantic learning. Furthermore, we aimed at examining the training effects on reading and spelling in comparison to an auditory-visual matching training and a working memory training program. Participants and Methods: One hundred and thirty-two children aged between 8 and 11 years participated in training in three weekly sessions of 12 minutes over 8 weeks, and completed assessments of reading, spelling, working memory and intelligence before and after training. Results: Overall, the results revealed that the word-picture training and the auditory-visual matching training led to substantial gains in reading and spelling performance in comparison to the working-memory training. Although both children with and without learning difficulties profited in reading and spelling after the word-picture training, the program led to differential effects for the two groups: after the word-picture training, children with learning difficulties profited more in spelling than children without learning difficulties, whereas children without learning difficulties benefited more in word comprehension. Conclusions: These findings highlight the need for frequent reading training with semantic connections in order to support the acquisition of literacy skills.

Relevance: 80.00%
Publisher:
Abstract:

Our previous study with typically developing children demonstrated a positive effect of working memory training on cognitive abilities. Building upon these findings, the aim of this multidisciplinary study is to investigate the effects of training of core functions in children who suffer from different learning disabilities, such as AD/HD, developmental dyslexia or specific language impairment. In addition to working memory training (BrainTwister), we apply a perceptual training that concentrates on auditory-visual matching (Audilex), as well as an implicit concept-learning task. We expect differential improvements in mental capacities, specifically in executive functions (working memory, attention, auditory and visual processing), scholastic abilities (language and mathematical skills), and problem solving. With that, we hope to derive further directions regarding helpful, individually adapted interventions in educational settings. Interested parties are invited to discuss and comment on the design, the research question, and the possibilities for recruiting subjects.

Relevance: 80.00%
Publisher:
Abstract:

Studies have shown that the discriminability of successive time intervals depends on the presentation order of the standard (St) and the comparison (Co) stimuli. Also, this order affects the point of subjective equality. The first effect is here called the standard-position effect (SPE); the latter is known as the time-order error. In the present study, we investigated how these two effects vary across interval types and standard durations, using Hellström’s sensation-weighting model to describe the results and relate them to stimulus comparison mechanisms. In Experiment 1, four modes of interval presentation were used, factorially combining interval type (filled, empty) and sensory modality (auditory, visual). For each mode, two presentation orders (St–Co, Co–St) and two standard durations (100 ms, 1,000 ms) were used; half of the participants received correctness feedback, and half of them did not. The interstimulus interval was 900 ms. The SPEs were negative (i.e., a smaller difference limen for St–Co than for Co–St), except for the filled-auditory and empty-visual 100-ms standards, for which a positive effect was obtained. In Experiment 2, duration discrimination was investigated for filled auditory intervals with four standards between 100 and 1,000 ms, an interstimulus interval of 900 ms, and no feedback. Standard duration interacted with presentation order, here yielding SPEs that were negative for standards of 100 and 1,000 ms, but positive for 215 and 464 ms. Our findings indicate that the SPE can be positive as well as negative, depending on the interval type and standard duration, reflecting the relative weighting of the stimulus information, as is described by the sensation-weighting model.
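As a purely illustrative aside (a simplified weighted-difference comparison with hypothetical weights, not Hellström’s full sensation-weighting model), the Python sketch below shows how weighting the first and second intervals unequally shifts the point of subjective equality in opposite directions for the St–Co and Co–St orders, i.e., how asymmetric weighting can produce a time-order error.

```python
# Hypothetical weighted-difference comparison of two successive intervals.
# w_first and w_second are illustrative values, not parameters from the study.

def pse(standard_ms: float, w_first: float, w_second: float, standard_first: bool) -> float:
    """Comparison duration judged equal to the standard, i.e. the value Co at which
    the weighted difference between the first and second interval is zero on average."""
    if standard_first:
        # Order St-Co: D = w_first*St - w_second*Co = 0  ->  Co = St * w_first / w_second
        return standard_ms * w_first / w_second
    # Order Co-St: D = w_first*Co - w_second*St = 0  ->  Co = St * w_second / w_first
    return standard_ms * w_second / w_first

w_first, w_second = 0.9, 1.0   # assume the first interval is weighted slightly less
for standard_first in (True, False):
    label = "St-Co" if standard_first else "Co-St"
    print(f"{label}: PSE for a 1000 ms standard = "
          f"{pse(1000, w_first, w_second, standard_first):.0f} ms")
```

With w_first < w_second the PSE shifts in opposite directions for the two presentation orders, which is the kind of order dependence the sensation-weighting account attributes to the relative weighting of the two stimuli.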

Relevance: 80.00%
Publisher:
Abstract:

The Brn-3 subfamily of POU domain genes is expressed in sensory neurons and in select brainstem nuclei. Earlier work has shown that targeted deletions of the Brn-3b and Brn-3c genes produce, respectively, defects in the retina and in the inner ear. We show herein that targeted deletion of the Brn-3a gene results in defective suckling and in uncoordinated limb and trunk movements, leading to early postnatal death. Brn-3a (-/-) mice show a loss of neurons in the trigeminal ganglia, the medial habenula, the red nucleus, and the caudal region of the inferior olivary nucleus, but not in the retina and dorsal root ganglia. In the trigeminal and dorsal root ganglia, but not in the retina, there is a marked decrease in the frequency of neurons expressing Brn-3b and Brn-3c, suggesting that Brn-3a positively regulates Brn-3b and Brn-3c expression in somatosensory neurons. Thus, Brn-3a exerts its major developmental effects in somatosensory neurons and in brainstem nuclei involved in motor control. The phenotypes of Brn-3a, Brn-3b, and Brn-3c mutant mice indicate that individual Brn-3 genes have evolved to control development in the auditory, visual, or somatosensory systems and that, despite differences between these systems in transduction mechanisms, sensory organ structures, and central information processing, there may be fundamental homologies in the genetic regulatory events that control their development.

Relevance: 80.00%
Publisher:
Abstract:

Assistive technology involving voice communication is used primarily by people who are deaf, hard of hearing, or who have speech and/or language disabilities. It is also used to a lesser extent by people with visual or motor disabilities. A very wide range of devices has been developed for people with hearing loss. These devices can be categorized not only by the modality of stimulation [i.e., auditory, visual, tactile, or direct electrical stimulation of the auditory nerve (auditory-neural)] but also in terms of the degree of speech processing that is used. At least four such categories can be distinguished: assistive devices (a) that are not designed specifically for speech, (b) that take the average characteristics of speech into account, (c) that process articulatory or phonetic characteristics of speech, and (d) that embody some degree of automatic speech recognition. Assistive devices for people with speech and/or language disabilities typically involve some form of speech synthesis or symbol generation for severe forms of language disability. Speech synthesis is also used in text-to-speech systems for sightless persons. Other applications of assistive technology involving voice communication include voice control of wheelchairs and other devices for people with mobility disabilities.