941 results for Research Audio-visual aids


Relevance:

100.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

100.00%

Publisher:

Abstract:

Pós-graduação em Estudos Linguísticos - IBILCE

Relevance:

100.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance:

100.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance:

100.00%

Publisher:

Abstract:

In this article we present and discuss a number of beliefs about listening comprehension among students in the last year of a teacher certification course in Letters (Italian and Portuguese). The data were collected by means of questionnaires answered by all participating students, logs and interviews conducted with five of them, classroom observation, lessons recorded on audio and video, and diaries. The study is grounded in the literature on beliefs and on definitions of listening comprehension as a skill. The analysis indicates that a number of students showed a lack of motivation and low expectations about developing oral comprehension skills, owing to several drawbacks in foreign language learning during their teacher education course. However, other beliefs that emerged from the data, such as the importance of visual aids in understanding an oral text, seem to have a positive effect towards a satisfactory proficiency level in oral comprehension.

Relevance:

100.00%

Publisher:

Abstract:

This study presents and discusses results from research on the use of English subtitles on news videos from the website Reuters.com (http://www.reuters.com) for pedagogical purposes in a Brazilian context (Academic English for Journalism). The research was conducted over two semesters at UNESP (Universidade Estadual Paulista Júlio de Mesquita Filho), with Journalism students as the audience to whom the videos were presented. The assumptions of theorists and experts in Audiovisual Translation were adopted as the theoretical framework. The first step of the study was an assessment of the syllabus of each course, which guided the choice of the most relevant and interesting videos for the students. After evaluating academic and professional interests, we selected videos and inserted appropriate subtitles, following strategies suggested by Panayota Georgakopoulou and Henrik Gottlieb. Finally, we presented the videos during the English classes: first without subtitles, to gauge the students' level of comprehension, and then with English subtitles. As initially assumed, the students did not fully comprehend specific details during the first presentation; they relied on their previous knowledge and on the visual aids to reach a superficial understanding of the news. Once the subtitles were added, the process of communication was finally accomplished.
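Subtitles like those inserted in the study are commonly distributed as SubRip (.srt) cues. A minimal sketch of generating one such cue follows; the helper name, cue text, and timings are invented for illustration and are not taken from the study:

```python
def srt_cue(index, start, end, text):
    """Format a single SubRip (.srt) subtitle cue.

    index: 1-based cue number
    start/end: 'HH:MM:SS,mmm' timestamps
    text: the subtitle line(s) shown on screen
    """
    return f"{index}\n{start} --> {end}\n{text}\n"

# Hypothetical cue for a news video
cue = srt_cue(1, "00:00:01,000", "00:00:04,000", "Markets opened higher this morning.")
print(cue)
```

Cues are concatenated with blank lines between them to form the complete .srt file.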

Relevance:

100.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

100.00%

Publisher:

Abstract:

Acoustic conditions in hospitals have been shown to influence a patient's physical and psychological health. Noise levels in an Omaha, Nebraska, hospital were measured and compared across three periods: before, during, and after renovations of a hospital wing. The renovations included cosmetic changes and the installation of new in-room patient audio-visual systems. Sound pressure levels were logged every 10 seconds over a four-day period in three locations: at the nurses' station, in the hallway, and in a nearby patient's room. The resulting data were analyzed in terms of the hourly A-weighted equivalent sound pressure levels (LAeq).
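The hourly equivalent level used above is an energy average, not an arithmetic average of decibel readings: each reading is converted back to relative intensity, averaged, and converted back to decibels. A minimal sketch of that computation (function name and sample values are illustrative, not from the study):

```python
import math

def hourly_leq(samples_db):
    """Equivalent continuous sound level (Leq) from a list of
    short-interval SPL readings in dB(A):
    Leq = 10 * log10( mean of 10^(L/10) )."""
    mean_intensity = sum(10 ** (level / 10) for level in samples_db) / len(samples_db)
    return 10 * math.log10(mean_intensity)

# One hour of 10-second readings = 360 samples (invented values):
readings = [55.0] * 180 + [65.0] * 180
print(round(hourly_leq(readings), 1))  # → 62.4
```

Note that the result sits well above the 60 dB arithmetic mean, because louder intervals dominate the energy average.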

Relevance:

100.00%

Publisher:

Abstract:

The ability to integrate sensory inputs deriving from different sensory modalities, but related to the same external event, into a unified percept is called multisensory integration, and it might represent an efficient mechanism of sensory compensation when a sensory modality is damaged by a cortical lesion. This hypothesis is discussed in the present dissertation. Experiment 1 explored the role of the superior colliculus (SC) in multisensory integration, testing patients with collicular lesions, patients with subcortical lesions not involving the SC, and healthy control subjects in a multisensory task. The results revealed that patients with collicular lesions, paralleling the evidence of animal studies, demonstrated a loss of multisensory enhancement, in contrast with control subjects, providing the first lesional evidence in humans of the essential role of the SC in mediating audio-visual integration. Experiment 2 investigated the role of the cortex in mediating multisensory integrative effects by inducing virtual lesions with inhibitory theta-burst stimulation over the temporo-parietal, occipital, and posterior parietal cortices, demonstrating that only the temporo-parietal cortex was causally involved in modulating the integration of audio-visual stimuli at the same spatial location. Given the involvement of the retino-colliculo-extrastriate pathway in mediating audio-visual integration, the functional sparing of this circuit in hemianopic patients is extremely relevant in the perspective of a multisensory-based approach to the recovery of unisensory defects. Experiment 3 demonstrated the spared functional activity of this circuit in a group of hemianopic patients, revealing implicit recognition of the fearful content of unseen visual stimuli (i.e. affective blindsight), an ability mediated by the retino-colliculo-extrastriate pathway and its connections with the amygdala.
Finally, Experiment 4 provided evidence that systematic audio-visual stimulation is effective in inducing long-lasting clinical improvements in patients with visual field defects, and revealed that the activity of the spared retino-colliculo-extrastriate pathway is responsible for the observed clinical amelioration, as suggested by the greater improvement, in tasks highly demanding in terms of spatial orienting, observed in patients with cortical lesions limited to the occipital cortex compared to patients with lesions extending to other cortical areas. Overall, the present results indicate that multisensory integration is mediated by the retino-colliculo-extrastriate pathway and that systematic audio-visual stimulation, by activating this spared neural circuit, can affect orientation towards the blind field in hemianopic patients and might therefore constitute an effective and innovative approach to the rehabilitation of unisensory visual impairments.

Relevance:

100.00%

Publisher:

Abstract:

Lesions to the primary geniculo-striate visual pathway cause blindness in the contralesional visual field. Nevertheless, previous studies have suggested that patients with visual field defects may still be able to implicitly process the affective valence of unseen emotional stimuli (affective blindsight) through alternative visual pathways bypassing the striate cortex. These alternative pathways may also allow exploitation of multisensory (audio-visual) integration mechanisms, such that auditory stimulation can enhance visual detection of stimuli which would otherwise be undetected when presented alone (crossmodal blindsight). The present dissertation investigated implicit emotional processing and multisensory integration when conscious visual processing is prevented by real or virtual lesions to the geniculo-striate pathway, in order to further clarify both the nature of these residual processes and the functional aspects of the underlying neural pathways. The present experimental evidence demonstrates that alternative subcortical visual pathways allow implicit processing of the emotional content of facial expressions in the absence of cortical processing. However, this residual ability is limited to fearful expressions. This finding suggests the existence of a subcortical system specialised in detecting danger signals based on coarse visual cues, therefore allowing the early recruitment of flight-or-fight behavioural responses even before conscious and detailed recognition of potential threats can take place. Moreover, the present dissertation extends the knowledge about crossmodal blindsight phenomena by showing that, unlike with visual detection, sound cannot crossmodally enhance visual orientation discrimination in the absence of functional striate cortex. 
This finding demonstrates, on the one hand, that the striate cortex plays a causative role in crossmodally enhancing visual orientation sensitivity and, on the other hand, that subcortical visual pathways bypassing the striate cortex, despite affording audio-visual integration processes leading to the improvement of simple visual abilities such as detection, cannot mediate multisensory enhancement of more complex visual functions, such as orientation discrimination.

Relevance:

100.00%

Publisher:

Abstract:

Research into visual hallucinations has accelerated over the last decade from around 350 publications per year in 2000 to over 500 in 2010. Increased recognition of the frequent occurrence of visual hallucinations in a number of common disorders, coupled with improvements in the measurement of phenomenology, and more sophisticated imaging techniques have allowed the development and initial testing of sophisticated models. However, key questions remain unanswered. Amongst these are: whether there is a satisfactory definition of hallucinations in a constructive visual system; whether there are one, two or several core varieties of hallucinations; what are the underlying brain mechanisms for hallucinations; and what, if anything, can be done to treat them when they lead to distress? Looking across research in several clinical areas suggests a tentative integrative model that allows the possibility of answering these questions, but much work remains to be done.

Relevance:

100.00%

Publisher:

Abstract:

Much of the research on visual hallucinations (VHs) has been conducted in the context of eye disease and neurodegenerative conditions, but little is known about these phenomena in psychiatric and nonclinical populations. The purpose of this article is to bring together current knowledge regarding VHs in the psychosis phenotype and contrast these data with the literature drawn from neurodegenerative disorders and eye disease. The evidence challenges the traditional views that VHs are atypical or uncommon in psychosis. The weighted mean prevalence of VHs is 27% in schizophrenia, 15% in affective psychosis, and 7.3% in the general community. VHs are linked to a more severe psychopathological profile and less favorable outcome in psychosis and neurodegenerative conditions. VHs typically co-occur with auditory hallucinations, suggesting a common etiological cause. VHs in psychosis are also remarkably complex, negative in content, and interpreted as having personal relevance. The cognitive mechanisms of VHs in psychosis have rarely been investigated, but existing studies point to source-monitoring deficits and distortions in top-down mechanisms, although evidence for visual processing deficits, which feature strongly in the organic literature, is lacking. Brain imaging studies point to the activation of visual cortex during hallucinations on a background of structural and connectivity changes within wider brain networks. The relationship between VHs in psychosis, eye disease, and neurodegeneration remains unclear, although the pattern of similarities and differences described in this review suggests that comparative studies may have potentially important clinical and theoretical implications.
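The pooled prevalence figures above are sample-size-weighted means across studies. A minimal sketch of that calculation (the study rates and sample sizes below are invented for illustration, not the review's actual data):

```python
def weighted_mean(rates, sizes):
    """Pool prevalence rates across studies,
    weighting each study by its sample size."""
    total = sum(sizes)
    return sum(r * n for r, n in zip(rates, sizes)) / total

# Two hypothetical studies: (prevalence of VHs, participants)
rates = [0.30, 0.20]
sizes = [300, 100]
print(weighted_mean(rates, sizes))  # → 0.275
```

Weighting by sample size keeps a small study with an extreme rate from dominating the pooled estimate.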

Relevance:

100.00%

Publisher:

Abstract:

BACKGROUND Co-speech gestures are part of nonverbal communication during conversations. They either support the verbal message or provide the interlocutor with additional information; as nonverbal cues, they also prompt the cooperative process of turn-taking. In the present study, we investigated the influence of co-speech gestures on the perception of dyadic dialogue in aphasic patients. In particular, we analysed the impact of co-speech gestures on gaze direction (towards the speaker or the listener) and on the fixation of body parts. We hypothesized that aphasic patients, who are restricted in verbal comprehension, adapt their visual exploration strategies. METHODS Sixteen aphasic patients and 23 healthy control subjects participated in the study. Visual exploration behaviour was measured by means of a contact-free infrared eye-tracker while subjects watched videos depicting spontaneous dialogues between two individuals. Cumulative fixation duration and mean fixation duration were calculated for the factors co-speech gesture (present or absent), gaze direction (towards the speaker or the listener), and region of interest (ROI), including hands, face, and body. RESULTS Both aphasic patients and healthy controls mainly fixated the speaker's face. We found a significant co-speech gesture × ROI interaction, indicating that the presence of a co-speech gesture encouraged subjects to look at the speaker. Further, a significant gaze direction × ROI × group interaction revealed that aphasic patients showed reduced cumulative fixation duration on the speaker's face compared to healthy controls. CONCLUSION Co-speech gestures guide the observer's attention towards the speaker, the source of semantic input. It is discussed whether an underlying semantic processing deficit or a deficit in integrating audio-visual information may cause aphasic patients to explore the speaker's face less.

Relevance:

100.00%

Publisher:

Abstract:

Rapid prototyping environments can speed up research on visual control algorithms. We have designed and implemented a software framework for fast prototyping of visual control algorithms for Micro Aerial Vehicles (MAVs). We combined a proxy-based network communication architecture with a custom Application Programming Interface. This allows multiple experimental configurations, such as drone swarms or distributed processing of a drone's video stream. Currently, the framework supports a low-cost MAV: the Parrot AR.Drone. Real tests have been performed on this platform, and the results show a comparatively low extra communication delay introduced by the framework, while adding new functionality and flexibility to the selected drone. The implementation is open-source and can be downloaded from www.vision4uav.com/?q=VC4MAV-FW
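The proxy-based architecture this abstract describes can be illustrated with a minimal sketch: clients issue commands through a uniform interface, and the proxy relays them to a backend (the drone's native protocol link in the real framework, a stub here) while tracking the latency each relay adds, which is the kind of extra communication delay the framework's tests measure. All class and function names below are hypothetical and are not taken from the actual VC4MAV-FW API:

```python
import time

class DroneProxy:
    """Hypothetical proxy layer: forwards client commands to a backend
    link and records the per-command relay latency. Illustrative only;
    not the actual VC4MAV-FW API."""

    def __init__(self, backend):
        self.backend = backend   # callable standing in for the drone link
        self.delays = []         # per-command relay latency, in seconds

    def send(self, command):
        start = time.perf_counter()
        reply = self.backend(command)          # relay to the drone link
        self.delays.append(time.perf_counter() - start)
        return reply

# Stub backend standing in for a real command channel (e.g. the AR.Drone's)
def fake_drone(command):
    return "ack:" + command

proxy = DroneProxy(fake_drone)
print(proxy.send("takeoff"))  # → ack:takeoff
print(proxy.send("land"))     # → ack:land
```

Because clients depend only on the proxy's interface, the backend can be swapped (simulator, different drone model, or a swarm of links) without changing client code, which is what makes the multiple experimental configurations possible.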