903 results for audio-visual methods
Abstract:
In this report we summarize the state of the art of speech emotion recognition from the signal-processing point of view. On the basis of multi-corpus experiments with machine-learning classifiers, we observe that existing supervised machine-learning approaches lead to database-dependent classifiers that cannot be applied to multi-language speech emotion recognition without additional training, because they discriminate the emotion classes according to the training language used. Since experimental results show that humans can perform language-independent categorisation, we drew a parallel between machine recognition and the cognitive process and tried to discover the sources of these divergent results. The analysis suggests that the main difference is that speech perception allows the extraction of language-independent features, although language-dependent features are incorporated at all levels of the speech signal and play a strongly discriminative role in human perception. Based on several results in related domains, we further suggest that the cognitive process of emotion recognition relies on categorisation, assisted by a hierarchical structure of emotional categories present in the cognitive space of all humans. We propose a strategy for developing language-independent machine emotion recognition, based on the identification of language-independent speech features and the use of additional information from visual (expression) features.
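The database dependence described above can be illustrated with a minimal sketch: a classifier trained on features from one corpus degrades on a second corpus whose features carry a systematic, language-dependent bias. Everything below is synthetic and hypothetical (the 2-D features, class centres, and the `shift` bias stand in for real acoustic features); it is not the authors' method, only a toy demonstration of the effect.

```python
import random

random.seed(0)

def make_corpus(shift, n=50):
    """Synthetic 2-D 'acoustic' features for two emotion classes.
    `shift` stands in for a language-dependent bias in the feature space."""
    data = []
    for label, centre in (("angry", 2.0), ("neutral", -2.0)):
        for _ in range(n):
            x = centre + random.gauss(0, 0.5) + shift
            y = centre + random.gauss(0, 0.5) + shift
            data.append(((x, y), label))
    return data

def train_centroids(corpus):
    """Fit a nearest-centroid classifier: one mean point per class."""
    sums, counts = {}, {}
    for (x, y), label in corpus:
        sx, sy = sums.get(label, (0.0, 0.0))
        sums[label] = (sx + x, sy + y)
        counts[label] = counts.get(label, 0) + 1
    return {l: (sx / counts[l], sy / counts[l]) for l, (sx, sy) in sums.items()}

def accuracy(model, corpus):
    correct = 0
    for (x, y), label in corpus:
        pred = min(model, key=lambda l: (x - model[l][0]) ** 2 + (y - model[l][1]) ** 2)
        correct += pred == label
    return correct / len(corpus)

corpus_a = make_corpus(shift=0.0)  # the "training language"
corpus_b = make_corpus(shift=4.0)  # a different language: biased features

model = train_centroids(corpus_a)
print("in-corpus accuracy:   ", accuracy(model, corpus_a))  # near 1.0
print("cross-corpus accuracy:", accuracy(model, corpus_b))  # sharply degraded
```

The shift pushes corpus B's "neutral" cluster onto corpus A's "angry" centroid, so roughly half the cross-corpus items are misclassified; this mirrors the report's point that classifiers discriminate along the training language unless language-independent features are identified.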
Abstract:
In this paper we discuss how an innovative audio-visual project was adopted to foster active, rather than declarative, learning in critical International Relations (IR). First, we explore the aesthetic turn in IR, to contrast it with the forms of representation that have dominated IR scholarship. Second, we describe how students were asked to record short audio or video projects to explore their own insights through aesthetic and non-written formats. Third, we explain how these projects are deeply embedded in social science methodologies. We cite our inspiration from applying a personal sociological imagination as a way to counterbalance a ‘marketised’ slant in higher education, in a global economy where students are often encouraged to consume, rather than produce, knowledge. Finally, we draw conclusions in terms of deeper forms of student engagement leading to new ways of thinking and presenting, new skills, and new connections between theory and practice.
Abstract:
In Cuenca there is no journalistic research and audiovisual production project that investigates, compiles, and presents information about traditional professions handed down over time, which are gradually being lost and heading toward complete extinction. This project can be considered innovative in that it brings together two areas: audiovisual communication and journalistic writing. They come together in presenting relevant information through a final product, both visual and written, that shows how these professions are practised by different people, along with their contexts and processes, with the aim of serving as cultural research support at the local and national level.
Abstract:
People possess different sensory modalities to detect, interpret, and efficiently act upon various events in a complex and dynamic environment (Fetsch, DeAngelis, & Angelaki, 2013). Much empirical work has been done to understand the interplay of modalities (e.g. audio-visual interactions, see Calvert, Spence, & Stein, 2004). On the one hand, integration of multimodal input as a functional principle of the brain enables the versatile and coherent perception of the environment (Lewkowicz & Ghazanfar, 2009). On the other hand, sensory integration does not necessarily mean that input from all modalities is weighted equally (Ernst, 2008). Rather, when two or more modalities are stimulated concurrently, one often finds one modality dominating over another. Studies 1 and 2 of the dissertation addressed the developmental trajectory of sensory dominance. In both studies, 6-year-olds, 9-year-olds, and adults were tested in order to examine sensory (audio-visual) dominance across different age groups. In Study 3, sensory dominance was put into an applied context by examining verbal and visual overshadowing effects among 4- to 6-year-olds performing a face recognition task. The results of Studies 1 and 2 support the default auditory dominance in young children proposed by Napolitano and Sloutsky (2004), persisting up to 6 years of age. For 9-year-olds, results on privileged modality processing were inconsistent: whereas visual dominance was revealed in Study 1, privileged auditory processing was revealed in Study 2. Among adults, visual dominance was observed in Study 1, as demonstrated in preceding studies (see Spence, Parise, & Chen, 2012), whereas no sensory dominance was revealed in Study 2. Potential explanations are discussed. Study 3 addressed verbal and visual overshadowing effects in 4- to 6-year-olds.
The aim was to examine whether verbalization (i.e., verbally describing a previously seen face) or visualization (i.e., drawing the seen face) might affect later face recognition. No effect of visualization on recognition accuracy was revealed. Rather than a verbal overshadowing effect, a verbal facilitation effect occurred. Moreover, verbal intelligence was a significant predictor of recognition accuracy in the verbalization group but not in the control group. This suggests that strengthening verbal intelligence in children can pay off in non-verbal domains as well, which might have educational implications.
Abstract:
Modern cochlear implantation technologies allow deaf patients to understand auditory speech; however, the implants deliver only a coarse auditory input, and patients must rely on long-term adaptive processes to achieve coherent percepts. In adults with post-lingual deafness, the greatest progress in speech recovery occurs during the first year after cochlear implantation, but there is wide variability in the level of cochlear implant outcomes and the temporal evolution of recovery. It has been proposed that when profoundly deaf subjects receive a cochlear implant, visual cross-modal reorganization of the brain is deleterious for auditory speech recovery. We tested this hypothesis in post-lingually deaf adults by analysing whether brain activity shortly after implantation correlated with the level of auditory recovery 6 months later. Based on brain activity induced by a speech-processing task, we found strong positive correlations in areas outside the auditory cortex. The highest positive correlations were found in the occipital cortex, involved in visual processing, as well as in the posterior-temporal cortex, known for audio-visual integration. A further positive correlation with auditory speech recovery was localized in the left inferior frontal area, known for speech processing. Our results demonstrate that the functional level of the visual modality is related to the proficiency level of auditory recovery. Based on the positive correlation of visual activity with auditory speech recovery, we suggest that the visual modality may facilitate the perception of a word's auditory counterpart in communicative situations. The link demonstrated between visual activity and auditory speech perception indicates that visuoauditory synergy is crucial for cross-modal plasticity and for fostering speech-comprehension recovery in adult cochlear-implanted deaf patients.
Abstract:
TOPIC: a computerized auditory-visual remediation program for schoolchildren with developmental dyslexia. AIMS: to verify the efficacy of a computerized auditory-visual remediation program in schoolchildren with developmental dyslexia. Among the specific aims, the study sought to compare the cognitive-linguistic performance of schoolchildren with developmental dyslexia with that of schoolchildren who are good readers; to compare pre- and post-testing assessment findings in schoolchildren with dyslexia who did and did not undergo the program; and, finally, to compare the remediation program findings between schoolchildren with dyslexia and good readers who underwent it. METHOD: 20 schoolchildren took part in this study. Group I (GI) was subdivided into GIe, composed of five schoolchildren with developmental dyslexia who underwent the program, and GIc, composed of five schoolchildren with developmental dyslexia who did not. Group II (GII) was subdivided into GIIe, composed of five good readers who underwent the remediation, and GIIc, composed of five good readers who did not. The computerized auditory-visual remediation program Play-on was used. RESULTS: GI performed below GII on auditory processing and phonological awareness skills at pre-testing. However, GIe performed similarly to GII at post-testing, evidencing the efficacy of the auditory-visual remediation in schoolchildren with developmental dyslexia. CONCLUSION: the study evidenced the efficacy of the auditory-visual remediation program in schoolchildren with developmental dyslexia.
Abstract:
BACKGROUND Co-speech gestures are part of nonverbal communication during conversations. They either support the verbal message or provide the interlocutor with additional information. Furthermore, as nonverbal cues they prompt the cooperative process of turn-taking. In the present study, we investigated the influence of co-speech gestures on the perception of dyadic dialogue in aphasic patients. In particular, we analysed the impact of co-speech gestures on gaze direction (towards speaker or listener) and fixation of body parts. We hypothesized that aphasic patients, who are restricted in verbal comprehension, adapt their visual exploration strategies. METHODS Sixteen aphasic patients and 23 healthy control subjects participated in the study. Visual exploration behaviour was measured by means of a contact-free infrared eye-tracker while subjects were watching videos depicting spontaneous dialogues between two individuals. Cumulative fixation duration and mean fixation duration were calculated for the factors co-speech gesture (present and absent), gaze direction (to the speaker or to the listener), and region of interest (ROI), including hands, face, and body. RESULTS Both aphasic patients and healthy controls mainly fixated the speaker's face. We found a significant co-speech gesture × ROI interaction, indicating that the presence of a co-speech gesture encouraged subjects to look at the speaker. Further, there was a significant gaze direction × ROI × group interaction, revealing that aphasic patients showed reduced cumulative fixation duration on the speaker's face compared to healthy controls. CONCLUSION Co-speech gestures guide the observer's attention towards the speaker, the source of semantic input. It is discussed whether an underlying semantic processing deficit or a deficit in integrating audio-visual information may cause aphasic patients to explore the speaker's face less.
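The two dependent measures named in the abstract, cumulative and mean fixation duration per region of interest, can be sketched as a simple aggregation over fixation records. The record format and durations below are hypothetical (not the study's actual eye-tracker output); the sketch only shows how the two measures relate.

```python
from collections import defaultdict

# Hypothetical fixation records as (ROI, duration in ms);
# ROIs follow the abstract: hands, face, body.
fixations = [
    ("face", 320), ("face", 280), ("hands", 150),
    ("body", 90), ("face", 410), ("hands", 120),
]

def fixation_stats(records):
    """Cumulative and mean fixation duration per region of interest (ROI)."""
    durations = defaultdict(list)
    for roi, duration in records:
        durations[roi].append(duration)
    return {
        roi: {"cumulative": sum(ds), "mean": sum(ds) / len(ds)}
        for roi, ds in durations.items()
    }

stats = fixation_stats(fixations)
print(stats["face"])  # three fixations on the face: 1010 ms cumulative
```

Cumulative duration grows with both the number and length of fixations on an ROI, whereas mean duration normalizes by fixation count, which is why the two measures are reported separately.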