946 results for Audio-visual materials


Relevance:

90.00%

Publisher:

Abstract:

TOPIC: a computerized auditory-visual remediation program for schoolchildren with developmental dyslexia. OBJECTIVES: to verify the efficacy of a computerized auditory-visual remediation program in schoolchildren with developmental dyslexia. Among the specific objectives, the study aimed to compare the cognitive-linguistic performance of schoolchildren with developmental dyslexia with that of schoolchildren who are good readers; to compare the findings of the pre- and post-testing assessment procedures in schoolchildren with dyslexia who did and did not undergo the program; and, finally, to compare the findings of the remediation program in schoolchildren with dyslexia and good readers who underwent it. METHOD: 20 schoolchildren participated in this study. Group I (GI) was subdivided into GIe, composed of five schoolchildren with developmental dyslexia who underwent the program, and GIc, composed of five schoolchildren with developmental dyslexia who did not. Group II (GII) was subdivided into GIIe, composed of five good readers who underwent the remediation, and GIIc, composed of five good readers who did not. The computerized auditory-visual remediation program Play-on was used. RESULTS: the results of this study revealed that GI showed inferior performance in auditory processing and phonological awareness skills compared with GII at pre-testing. However, GIe showed performance similar to GII at post-testing, demonstrating the efficacy of auditory-visual remediation in schoolchildren with developmental dyslexia. CONCLUSION: the study demonstrated the efficacy of the auditory-visual remediation program in schoolchildren with developmental dyslexia.

Relevance:

90.00%

Publisher:

Abstract:

This article is part of a study aimed at identifying the main barriers to the inclusion of visually impaired students in Physics classes. It focuses on understanding which communication contexts facilitate or hinder the effective participation of students with visual impairment in Mechanics activities. To do so, the research defines, on the basis of empirical-sensory and semantic structures, the languages applied in the activities, as well as the moment and the speech pattern in which those languages were used. As a result, it identifies the relation between the use of the interdependent audio-visual empirical language structure and the non-interactive episodes of authority; the decrease in the use of this structure in interactive episodes; the creation of educational segregation environments within the classroom; and the frequent use of the interdependent tactile-auditory empirical language structure in such environments.

Relevance:

90.00%

Publisher:

Abstract:

This article continues the presentation of the results of research reported in Camargo and Nardi (2007). It is part of a study that seeks to understand the main barriers to the inclusion of students with visual impairment in Physics classes. It aims to understand which communication contexts favor or hinder the real participation of visually impaired students in thermology activities. To this end, the research defines, on the basis of empirical-sensory and semantic structures, the languages used in the activities, as well as the moment and the speech pattern in which those languages were used. As a result, it identifies a strong relation between the use of the interdependent audio-visual empirical language structure and the non-interactive episodes of authority; a decrease in the use of this structure in interactive episodes; and the creation of educational segregation environments within the classroom.

Relevance:

90.00%

Publisher:

Abstract:

This article is part of a wider study that seeks to understand the main barriers to inclusion in Physics classes for students with visual impairment. It aims to understand which communication contexts favor or impede the real participation of visually impaired students in Modern Physics activities. The research defines, on the basis of empirical-sensory and semantic structures, the languages used in the activities, as well as the moment and the speech pattern in which those languages were used. As a result, this study identifies a strong relation between the use of the interdependent audio-visual empirical language structure and the non-interactive episodes of authority; a decrease in the use of this structure in interactive episodes; the creation of educational segregation environments within the classroom; and the frequent use of the interdependent tactile-auditory empirical language structure in these environments. Moreover, the concept of «special educational need» is discussed, its inadequate use is analyzed, and suggestions are given for its correct use.

Relevance:

90.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

90.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

90.00%

Publisher:

Abstract:

The wide use of e-technologies represents a great opportunity for underserved segments of the population, especially with the aim of reintegrating excluded individuals back into society through education. This is particularly true for people with different types of disabilities, who may have difficulties attending traditional on-site learning programs that are typically based on printed learning resources. The creation and provision of accessible e-learning contents may therefore become a key factor in enabling people with different access needs to enjoy quality learning experiences and services. Another e-learning challenge is m-learning (mobile learning), which is emerging as a consequence of the diffusion of mobile terminals and provides the opportunity to browse didactical materials everywhere, outside the places traditionally devoted to education. Both situations share the need to access materials under limited conditions, and both collide with the growing use of rich media in didactical contents, which are designed to be enjoyed without any restriction. Nowadays, Web-based teaching makes great use of multimedia technologies, ranging from Flash animations to prerecorded video-lectures. Rich media in e-learning offer significant potential for enhancing the learning environment, by helping to increase access to education, enhance the learning experience, and support multiple learning styles. Moreover, they can often be used to improve the structure of Web-based courses. These highly variegated and structured contents may significantly improve the quality and the effectiveness of educational activities for learners. For example, rich media contents allow us to describe complex concepts and process flows. Audio and video elements may be used to add a “human touch” to distance-learning courses. Finally, real lectures may be recorded and distributed to integrate or enrich online materials. 
A confirmation of the advantages of these approaches can be seen in the exponential growth of video-lecture availability on the net, due to the ease of recording and delivering activities which take place in a traditional classroom. Furthermore, the wide use of assistive technologies for learners with disabilities injects new life into e-learning systems. E-learning allows distance and flexible educational activities, thus helping disabled learners to access resources which would otherwise present significant barriers for them. For instance, students with visual impairments have difficulties reading traditional visual materials, deaf learners have trouble following traditional (spoken) lectures, and people with motion disabilities have problems attending on-site programs. As already mentioned, the use of wireless technologies and pervasive computing may really enhance the educational learner experience by offering mobile e-learning services that can be accessed from handheld devices. This new paradigm of educational content distribution maximizes the benefits for learners, since it enables users to overcome constraints imposed by the surrounding environment. While certainly helpful for users without disabilities, we believe that the use of new mobile technologies may also become a fundamental tool for impaired learners, since it frees them from sitting in front of a PC. In this way, educational activities can be enjoyed by all users, without hindrance, thus increasing the social inclusion of non-typical learners. While the provision of fully accessible and portable video-lectures may be extremely useful for students, it is widely recognized that structuring and managing rich media contents for mobile learning services are complex and expensive tasks. Indeed, major difficulties originate from the basic need to provide a textual equivalent for each media resource composing a rich media Learning Object (LO). 
Moreover, tests need to be carried out to establish whether a given LO is fully accessible to all kinds of learners. Unfortunately, both these tasks are truly time-consuming processes, depending on the type of contents the teacher is writing and on the authoring tool he/she is using. Due to these difficulties, online LOs are often distributed as partially accessible or totally inaccessible content. Bearing this in mind, this thesis aims to discuss the key issues of a system we have developed to deliver accessible, customized or nomadic learning experiences to learners with different access needs and skills. To reduce the risk of excluding users with particular access capabilities, our system exploits Learning Objects (LOs) which are dynamically adapted and transcoded based on the specific needs of non-typical users and on the barriers that they can encounter in the environment. The basic idea is to dynamically adapt contents, by selecting them from a set of media resources packaged in SCORM-compliant LOs and stored in a self-adapting format. The system schedules and orchestrates a set of transcoding processes based on specific learner needs, so as to produce a customized LO that can be fully enjoyed by any (impaired or mobile) student.
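The basic adaptation idea described in this abstract (selecting media resources from a packaged Learning Object according to learner needs, with transcoding as a fallback) can be sketched roughly as follows. All names, profiles, and media types here are illustrative assumptions, not the thesis's actual implementation:

```python
# Minimal sketch (hypothetical names) of LO adaptation: for each resource in a
# Learning Object, pick the media alternative a given learner profile can use,
# and fall back to scheduling a transcoding step when none is available.

from dataclasses import dataclass, field

@dataclass
class Resource:
    name: str
    alternatives: dict = field(default_factory=dict)  # media type -> content URI

# Media types each (hypothetical) learner profile can consume, in preference order.
PROFILE_PREFERENCES = {
    "blind": ["audio", "text"],
    "deaf": ["video_captioned", "text"],
    "mobile": ["audio", "text", "video_low_res"],
    "default": ["video", "audio", "text"],
}

def adapt_learning_object(resources, profile):
    """Select, for each resource, the first alternative the profile supports."""
    prefs = PROFILE_PREFERENCES.get(profile, PROFILE_PREFERENCES["default"])
    adapted = {}
    for res in resources:
        for media in prefs:
            if media in res.alternatives:
                adapted[res.name] = (media, res.alternatives[media])
                break
        else:
            # No suitable alternative packaged: schedule a transcoding step
            # to produce the textual equivalent mentioned in the abstract.
            adapted[res.name] = ("text", f"transcode({res.name})")
    return adapted
```

For example, a blind learner would receive the audio track of a video-lecture where one exists, and a transcoded textual equivalent otherwise; the real system described here would additionally orchestrate these transcoding processes over SCORM-compliant packages.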

Relevance:

90.00%

Publisher:

Abstract:

The ability to integrate sensory inputs deriving from different sensory modalities, but related to the same external event, into a unified percept is called multisensory integration; it might represent an efficient mechanism of sensory compensation when a sensory modality is damaged by a cortical lesion. This hypothesis is discussed in the present dissertation. Experiment 1 explored the role of the superior colliculus (SC) in multisensory integration, testing patients with collicular lesions, patients with subcortical lesions not involving the SC, and healthy control subjects in a multisensory task. The results revealed that patients with collicular lesions, paralleling the evidence of animal studies, demonstrated a loss of multisensory enhancement, in contrast with control subjects, providing the first lesion evidence in humans of the essential role of the SC in mediating audio-visual integration. Experiment 2 investigated the role of the cortex in mediating multisensory integrative effects by inducing virtual lesions with inhibitory theta-burst stimulation over the temporo-parietal cortex, the occipital cortex, and the posterior parietal cortex, demonstrating that only the temporo-parietal cortex was causally involved in modulating the integration of audio-visual stimuli at the same spatial location. Given the involvement of the retino-colliculo-extrastriate pathway in mediating audio-visual integration, the functional sparing of this circuit in hemianopic patients is extremely relevant from the perspective of a multisensory-based approach to the recovery of unisensory defects. Experiment 3 demonstrated the spared functional activity of this circuit in a group of hemianopic patients, revealing the presence of implicit recognition of the fearful content of unseen visual stimuli (i.e. affective blindsight), an ability mediated by the retino-colliculo-extrastriate pathway and its connections with the amygdala. 
Finally, Experiment 4 provided evidence that systematic audio-visual stimulation is effective in inducing long-lasting clinical improvements in patients with visual field defects, and revealed that the activity of the spared retino-colliculo-extrastriate pathway is responsible for the observed clinical amelioration, as suggested by the greater improvement, found in tasks highly demanding in terms of spatial orienting, in patients with cortical lesions limited to the occipital cortex compared to patients with lesions extending to other cortical areas. Overall, the present results indicate that multisensory integration is mediated by the retino-colliculo-extrastriate pathway and that systematic audio-visual stimulation, by activating this spared neural circuit, can affect orientation towards the blind field in hemianopic patients and might therefore constitute an effective and innovative approach to the rehabilitation of unisensory visual impairments.

Relevance:

90.00%

Publisher:

Abstract:

Lesions to the primary geniculo-striate visual pathway cause blindness in the contralesional visual field. Nevertheless, previous studies have suggested that patients with visual field defects may still be able to implicitly process the affective valence of unseen emotional stimuli (affective blindsight) through alternative visual pathways bypassing the striate cortex. These alternative pathways may also allow exploitation of multisensory (audio-visual) integration mechanisms, such that auditory stimulation can enhance visual detection of stimuli which would otherwise be undetected when presented alone (crossmodal blindsight). The present dissertation investigated implicit emotional processing and multisensory integration when conscious visual processing is prevented by real or virtual lesions to the geniculo-striate pathway, in order to further clarify both the nature of these residual processes and the functional aspects of the underlying neural pathways. The present experimental evidence demonstrates that alternative subcortical visual pathways allow implicit processing of the emotional content of facial expressions in the absence of cortical processing. However, this residual ability is limited to fearful expressions. This finding suggests the existence of a subcortical system specialised in detecting danger signals based on coarse visual cues, therefore allowing the early recruitment of flight-or-fight behavioural responses even before conscious and detailed recognition of potential threats can take place. Moreover, the present dissertation extends the knowledge about crossmodal blindsight phenomena by showing that, unlike with visual detection, sound cannot crossmodally enhance visual orientation discrimination in the absence of functional striate cortex. 
This finding demonstrates, on the one hand, that the striate cortex plays a causative role in crossmodally enhancing visual orientation sensitivity and, on the other hand, that subcortical visual pathways bypassing the striate cortex, despite affording audio-visual integration processes leading to the improvement of simple visual abilities such as detection, cannot mediate multisensory enhancement of more complex visual functions, such as orientation discrimination.

Relevance:

90.00%

Publisher:

Abstract:

BACKGROUND Co-speech gestures are part of nonverbal communication during conversations. They either support the verbal message or provide the interlocutor with additional information. Furthermore, they prompt as nonverbal cues the cooperative process of turn taking. In the present study, we investigated the influence of co-speech gestures on the perception of dyadic dialogue in aphasic patients. In particular, we analysed the impact of co-speech gestures on gaze direction (towards speaker or listener) and fixation of body parts. We hypothesized that aphasic patients, who are restricted in verbal comprehension, adapt their visual exploration strategies. METHODS Sixteen aphasic patients and 23 healthy control subjects participated in the study. Visual exploration behaviour was measured by means of a contact-free infrared eye-tracker while subjects were watching videos depicting spontaneous dialogues between two individuals. Cumulative fixation duration and mean fixation duration were calculated for the factors co-speech gesture (present and absent), gaze direction (to the speaker or to the listener), and region of interest (ROI), including hands, face, and body. RESULTS Both aphasic patients and healthy controls mainly fixated the speaker's face. We found a significant co-speech gesture × ROI interaction, indicating that the presence of a co-speech gesture encouraged subjects to look at the speaker. Further, there was a significant gaze direction × ROI × group interaction revealing that aphasic patients showed reduced cumulative fixation duration on the speaker's face compared to healthy controls. CONCLUSION Co-speech gestures guide the observer's attention towards the speaker, the source of semantic input. It is discussed whether an underlying semantic processing deficit or a deficit to integrate audio-visual information may cause aphasic patients to explore less the speaker's face.

Relevance:

90.00%

Publisher:

Abstract:

Background: Co-speech gestures are part of nonverbal communication during conversations. They either support the verbal message or provide the interlocutor with additional information. Furthermore, they prompt as nonverbal cues the cooperative process of turn taking. In the present study, we investigated the influence of co-speech gestures on the perception of dyadic dialogue in aphasic patients. In particular, we analysed the impact of co-speech gestures on gaze direction (towards speaker or listener) and fixation of body parts. We hypothesized that aphasic patients, who are restricted in verbal comprehension, adapt their visual exploration strategies. Methods: Sixteen aphasic patients and 23 healthy control subjects participated in the study. Visual exploration behaviour was measured by means of a contact-free infrared eye-tracker while subjects were watching videos depicting spontaneous dialogues between two individuals. Cumulative fixation duration and mean fixation duration were calculated for the factors co-speech gesture (present and absent), gaze direction (to the speaker or to the listener), and region of interest (ROI), including hands, face, and body. Results: Both aphasic patients and healthy controls mainly fixated the speaker’s face. We found a significant co-speech gesture × ROI interaction, indicating that the presence of a co-speech gesture encouraged subjects to look at the speaker. Further, there was a significant gaze direction × ROI × group interaction revealing that aphasic patients showed reduced cumulative fixation duration on the speaker’s face compared to healthy controls. Conclusion: Co-speech gestures guide the observer’s attention towards the speaker, the source of semantic input. It is discussed whether an underlying semantic processing deficit or a deficit to integrate audio-visual information may cause aphasic patients to explore less the speaker’s face.

Keywords: gestures, visual exploration, dialogue, aphasia, apraxia, eye movements

Relevance:

90.00%

Publisher:

Abstract:

Cognitive theories have shown that human thought is embodied; that is, we access reality through our senses and cannot escape them. To understand and handle abstract concepts, we use metaphorical projections based on bodily sensations, hence the ubiquity of metaphor in everyday language. Although this claim has been widely tested through the analysis of verbal corpora in different languages, there is hardly any research on audiovisual corpora. If primary metaphors form part of our cognitive unconscious, are inherent to human beings, and are a consequence of the nature of the brain, they must also generate visual metaphors. This article analyzes and discusses a series of examples to verify this.

Relevance:

90.00%

Publisher:

Abstract:

Bibliography: p. 41.

Relevance:

90.00%

Publisher:

Abstract:

Children with autistic spectrum disorder (ASD) may have poor audio-visual integration, possibly reflecting dysfunctional 'mirror neuron' systems which have been hypothesised to be at the core of the condition. In the present study, a computer program, utilizing speech synthesizer software and a 'virtual' head (Baldi), delivered speech stimuli for identification in auditory, visual or bimodal conditions. Children with ASD were poorer than controls at recognizing stimuli in the unimodal conditions, but once performance on this measure was controlled for, no group difference was found in the bimodal condition. A group of participants with ASD were also trained to develop their speech-reading ability. Training improved visual accuracy and this also improved the children's ability to utilize visual information in their processing of speech. Overall results were compared to predictions from mathematical models based on integration and non-integration, and were most consistent with the integration model. We conclude that, whilst they are less accurate in recognizing stimuli in the unimodal condition, children with ASD show normal integration of visual and auditory speech stimuli. Given that training in recognition of visual speech was effective, children with ASD may benefit from multi-modal approaches in imitative therapy and language training. (C) 2004 Elsevier Ltd. All rights reserved.
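The abstract above compares observed bimodal performance against mathematical models of integration and non-integration. It does not give the equations, but one classic integration model in audio-visual speech perception is the Fuzzy Logical Model of Perception (FLMP), which combines unimodal supports multiplicatively; the sketch below contrasts it with a simple non-integration (single-channel mixture) alternative, purely as an illustration of the model class:

```python
# Illustrative sketch of one classic audio-visual integration model (FLMP)
# versus a simple non-integration alternative; the study cited above compared
# models of these two classes but does not specify these exact equations.

def flmp_bimodal(a, v):
    """FLMP prediction: multiplicative combination of the unimodal supports
    a (auditory) and v (visual), renormalized over the two response
    alternatives."""
    return (a * v) / (a * v + (1 - a) * (1 - v))

def nonintegration_bimodal(a, v, w=0.5):
    """Non-integration (mixture) prediction: on each trial the response is
    based on one modality alone, chosen with probability w."""
    return w * a + (1 - w) * v
```

With moderately informative inputs (e.g. a = 0.8, v = 0.7), the integration model predicts bimodal accuracy above either unimodal level, while the mixture model predicts a value between them; fitting both to the children's data is what allows the kind of model comparison the abstract reports.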

Relevance:

90.00%

Publisher:

Abstract:

Our thesis undertakes to re-conceptualize our new audio-visual environment and the experience we have of it. In the digital age of the generalized dissemination of moving images, we delineate a category of images that we regard as the most likely to have an impact on human development. We call them synchrono-photo-temporalized image-sounds. More specifically, we seek to bring to light their power of affect and control by demonstrating that they have a definite influence on the process of individuation, an influence greatly facilitated by the structural isotopy between the flow of consciousness and their own temporal flow. Through the research of Bernard Stiegler, we also note the important role that attention and memory play in the process of individuation. Our reflection as a whole makes us realize the extent to which Quebec's current education system fails in its task of civic formation by not providing adequate teaching about moving images.