947 results for Audio-visual Integration
Abstract:
Mode of access: Internet.
Abstract:
A replacement for, rather than an addition to, the bibliographies of the former National Organization for Public Health Nursing, the National League of Nursing Education, 1952, and the National League for Nursing, 1954-55.
Abstract:
Vol. 5 issued by the National League for Nursing, Division of Nursing Education.
Abstract:
Description based on: 1985.
Abstract:
"February 1971."
Abstract:
Cover title.
Abstract:
In this report we summarize the state of the art of speech emotion recognition from the signal-processing point of view. On the basis of multi-corpus experiments with machine-learning classifiers, we observe that existing approaches to supervised machine learning lead to database-dependent classifiers which cannot be applied to multi-language speech emotion recognition without additional training, because they discriminate the emotion classes according to the training language used. Since experimental results show that humans can perform language-independent categorisation, we drew a parallel between machine recognition and the cognitive process and tried to discover the sources of these divergent results. The analysis suggests that the main difference is that speech perception allows the extraction of language-independent features, even though language-dependent features are incorporated at all levels of the speech signal and play a strong discriminative role in human perception. Based on several results in related domains, we further suggest that the cognitive process of emotion recognition rests on categorisation, assisted by a hierarchical structure of the emotional categories that exists in the cognitive space of all humans. We propose a strategy for developing language-independent machine emotion recognition, related to the identification of language-independent speech features and the use of additional information from visual (expression) features.
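The database dependence described in this abstract can be illustrated with a minimal sketch. This is not code from the report: it assumes synthetic one-dimensional "acoustic" features in which each emotion class has a language-independent mean plus a language-dependent offset, and uses a simple nearest-centroid classifier. Training on one corpus then yields good within-corpus accuracy but near-chance cross-corpus accuracy, mirroring the observation that the classifier has partly learned the language rather than the emotion.

```python
# Hypothetical demonstration of database-dependent emotion classifiers.
# Feature values, class means and the language shift are all invented
# for illustration; real systems use high-dimensional acoustic features.
import random

random.seed(0)

EMOTIONS = ["anger", "joy", "sadness"]
CLASS_MEANS = {"anger": 3.0, "joy": 1.5, "sadness": -2.0}  # language-independent part
LANG_SHIFT = {"corpus_A": 0.0, "corpus_B": 2.5}            # language-dependent part

def sample_corpus(name, n_per_class=50):
    """Draw (feature, label) pairs for one simulated speech corpus."""
    data = []
    for emo in EMOTIONS:
        for _ in range(n_per_class):
            x = random.gauss(CLASS_MEANS[emo] + LANG_SHIFT[name], 0.5)
            data.append((x, emo))
    return data

def train_centroids(data):
    """Nearest-centroid classifier: store the mean feature per emotion."""
    sums, counts = {}, {}
    for x, emo in data:
        sums[emo] = sums.get(emo, 0.0) + x
        counts[emo] = counts.get(emo, 0) + 1
    return {emo: sums[emo] / counts[emo] for emo in sums}

def accuracy(centroids, data):
    """Fraction of samples whose nearest centroid matches the true label."""
    correct = 0
    for x, emo in data:
        pred = min(centroids, key=lambda e: abs(x - centroids[e]))
        correct += pred == emo
    return correct / len(data)

centroids = train_centroids(sample_corpus("corpus_A"))
print("within-corpus accuracy:", accuracy(centroids, sample_corpus("corpus_A")))
print("cross-corpus accuracy: ", accuracy(centroids, sample_corpus("corpus_B")))
```

The within-corpus score stays high while the cross-corpus score collapses toward chance, because the language-dependent offset moves corpus B's classes across the decision boundaries learned on corpus A.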
Abstract:
In this paper we discuss how an innovative audio-visual project was adopted to foster active, rather than declarative, learning in critical International Relations (IR). First, we explore the aesthetic turn in IR, contrasting it with the forms of representation that have dominated IR scholarship. Second, we describe how students were asked to record short audio or video projects to explore their own insights through aesthetic and non-written formats. Third, we explain how these projects are understood to be deeply embedded in social science methodologies. We cite our inspiration from applying a personal sociological imagination as a way to counterbalance a ‘marketised’ slant in higher education, in a global economy where students are often encouraged to consume, rather than produce, knowledge. Finally, we draw conclusions in terms of deeper forms of student engagement, leading to new ways of thinking and presenting, new skills, and new connections between theory and practice.
Abstract:
There is no journalistic research and audio-visual production project in Cuenca that investigates, compiles and presents information on those traditional trades handed down over time, which are gradually being lost and are headed toward complete extinction. This project can, in a way, be considered innovative, since it involves two areas: audio-visual communication and journalistic writing. They are brought together by the aim of presenting relevant information, through a final product both visual and written, that shows how these trades are carried out by different human actors, their contexts and their processes, with the intention of serving as support for cultural research at the local and national level.
Abstract:
The human brain is equipped with a flexible audio-visual system, which interprets and guides responses to external events according to the spatial alignment, temporal synchronization and effectiveness of unimodal signals. The aim of the present thesis was to explore the possibility that such a system might represent the neural correlate of sensory compensation after damage to one sensory pathway. To this purpose, three experimental studies were conducted, addressing the immediate, short-term and long-term effects of audio-visual integration in patients with a Visual Field Defect (VFD). Experiment 1 investigated whether the integration of stimuli from different modalities (cross-modal) and from the same modality (within-modal) has a different, immediate effect on localization behaviour. Patients had to localize modality-specific stimuli (visual or auditory), cross-modal stimulus pairs (visual-auditory) and within-modal stimulus pairs (visual-visual). Results showed that cross-modal stimuli evoked a greater improvement than within-modal stimuli, consistent with a Bayesian explanation. Moreover, even when visual processing was impaired, cross-modal stimuli improved performance in an optimal fashion. These findings support the hypothesis that the improvement derived from multisensory integration is not attributable to simple target redundancy, and show that optimal integration of cross-modal signals occurs at processing stages that are not consciously accessible. Experiment 2 examined the possibility of inducing a short-term improvement in localization performance without explicit knowledge of the visual stimulus. Patients with VFD and patients with neglect had to localize weak sounds before and after brief exposure to passive cross-modal stimulation, comprising spatially disparate or spatially coincident audio-visual stimuli.
After exposure to spatially disparate stimuli in the affected field, only patients with neglect exhibited a shift of auditory localization toward the visual attractor (the so-called Ventriloquism After-Effect). In contrast, after adaptation to spatially coincident stimuli, both neglect and hemianopic patients exhibited a significant improvement in auditory localization, proving the occurrence of an After-Effect for multisensory enhancement. These results suggest the presence of two distinct recalibration mechanisms, each mediated by a different neural route: a geniculo-striate circuit and a colliculus-extrastriate circuit, respectively. Finally, Experiment 3 verified whether systematic audio-visual stimulation could exert a long-lasting effect on patients’ oculomotor behaviour. Eye-movement responses during a visual search task and a reading task were studied before and after visual (control) or audio-visual (experimental) training in a group of twelve patients with VFD and twelve control subjects. Results showed that, prior to treatment, patients’ performance differed significantly from that of controls with respect to fixation and saccade parameters; after audio-visual training, all patients showed an improvement in ocular exploration characterized by fewer fixations and refixations, quicker and larger saccades, and reduced scanpath length. Similarly, reading parameters were significantly affected by the training, with respect to the specific impairments observed in left- and right-hemisphere-damaged patients. The present findings provide evidence that systematic audio-visual stimulation may encourage a more organized pattern of visual exploration with long-lasting effects. In conclusion, results from these studies clearly demonstrate that the beneficial effects of audio-visual integration can be retained in the absence of explicit processing of the visual stimulus.
Surprisingly, an improvement in spatial orienting can be obtained not only when an on-line response is required, but also after either brief or prolonged adaptation to audio-visual stimulus pairs, suggesting that the mechanisms subserving cross-modal perceptual learning are maintained after damage to the geniculo-striate pathway. The colliculus-extrastriate pathway, which is spared in patients with VFD, seems to play a pivotal role in this sensory compensation.
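The "Bayesian explanation" invoked in this abstract is standardly formalized as precision-weighted cue combination: each unimodal location estimate is weighted by its reliability (inverse variance), and the fused estimate always has lower variance than either cue alone. The sketch below is not from the thesis; the numbers are invented for illustration, but it shows why audio-visual pairs can outperform a single cue even when the visual estimate is degraded, as in a visual field defect.

```python
# Minimal sketch of maximum-likelihood (precision-weighted) cue combination.
# mu_* are location estimates (e.g. degrees of azimuth), var_* their variances;
# all concrete values below are hypothetical.

def combine_cues(mu_v, var_v, mu_a, var_a):
    """Fuse a visual and an auditory location estimate optimally."""
    w_v = (1 / var_v) / (1 / var_v + 1 / var_a)   # reliability weight of vision
    mu = w_v * mu_v + (1 - w_v) * mu_a            # combined location estimate
    var = 1 / (1 / var_v + 1 / var_a)             # combined variance
    return mu, var

# Intact vision: the precise visual cue dominates, and the fused variance
# (0.8) is smaller than either unimodal variance.
print(combine_cues(mu_v=10.0, var_v=1.0, mu_a=14.0, var_a=4.0))

# Degraded vision (e.g. within the affected field): the noisy visual cue
# contributes less, yet fusion still beats audition alone.
print(combine_cues(mu_v=10.0, var_v=9.0, mu_a=14.0, var_a=4.0))
```

The key property is that the combined variance is below the smaller of the two input variances in both cases, which is the signature of optimal integration rather than simple target redundancy.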