2 results for Multimodal Interaction
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
This research project is based on the Multimodal Corpus of Chinese Court Interpreting (MUCCCI [mutʃɪ]), a small-scale multimodal corpus built from eight authentic court hearings with Chinese-English interpreting in Mainland China. The corpus contains approximately 92,500 word tokens in total. In addition to the transcription of linguistic and paralinguistic features, MUCCCI includes approximately 1,200 annotations of facial expressions, produced according to the facial expression classification rules proposed by Black and Yacoob (1995) and linked to the six basic human emotions: anger, disgust, happiness, surprise, sadness, and fear. This thesis is an example of conducting qualitative analysis of interpreter-mediated courtroom interaction through a multimodal corpus. In particular, miscommunication events (MEs) and the reasons behind them were investigated in detail. Although the queries used to locate MEs were based on the non-verbal annotations, both verbal and non-verbal features were treated as indispensable parts of the overall context. The thesis also includes a detailed description of the compilation of MUCCCI in ELAN, from data collection to transcription, POS tagging, and non-verbal annotation. The research aims to assess the feasibility of conducting qualitative analysis through a multimodal corpus of court interpreting. The principle of integrating verbal and non-verbal features into a single, comprehensive context is emphasized throughout. The qualitative analysis of MEs can offer guidance for improving court interpreters' performance, and the constraints and difficulties reported can serve as a reference for similar research in the future.
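As a rough illustration of how emotion-annotation queries of this kind could be scripted outside the ELAN interface, the sketch below reads an ELAN (.eaf) file with the pympi-ling Python library, tallies facial-expression labels, and flags spans for manual review. The thesis itself worked within ELAN; the file name, tier name, and flagging rule here are assumptions for illustration only, not part of MUCCCI.

# Illustrative sketch only, not the method used in the thesis.
# Assumed names: "hearing_01.eaf" and the tier "FacialExpression" are hypothetical.
from collections import Counter

import pympi  # pip install pympi-ling

EAF_FILE = "hearing_01.eaf"        # hypothetical ELAN file for one court hearing
EMOTION_TIER = "FacialExpression"  # hypothetical tier holding emotion labels

eaf = pympi.Elan.Eaf(EAF_FILE)

# Each annotation is a tuple beginning with (start_ms, end_ms, label).
annotations = eaf.get_annotation_data_for_tier(EMOTION_TIER)

# Distribution over the six basic emotions (Black & Yacoob, 1995).
counts = Counter(ann[2].lower() for ann in annotations)
for emotion in ("anger", "disgust", "happiness", "surprise", "sadness", "fear"):
    print(f"{emotion:10s} {counts.get(emotion, 0)}")

# Example heuristic: flag annotation spans whose labels often co-occur with
# miscommunication, so their verbal tiers can be inspected by hand.
candidates = [ann for ann in annotations
              if ann[2].lower() in {"anger", "disgust", "surprise"}]
print(f"{len(candidates)} candidate spans flagged for manual review")

In practice, each flagged time span would still have to be read against the transcription and paralinguistic tiers, since the thesis stresses that verbal and non-verbal features jointly constitute the context in which an ME can be identified.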
Abstract:
In sport climbing, athletes with vision impairments are constantly accompanied by their guides – usually their trainers – both during the preparatory inspection of the routes and while climbing. Trainers are, so to speak, the climbers' eyes, in the sense that they systematically put their vision at the service of the climbers' mobility and sporting performance. The synergy between trainers and athletes rests on distinctive, strictly multimodal interactive practices focused on the body and on its constantly evolving sensory engagement with the materiality of the routes. In this context, the sensory perception and embodied actions required to plan and execute the climb are configured as genuinely interactive accomplishments. Drawing on the theoretical framework of Embodied and Situated Cognition and on the methodology of Conversation Analysis, this thesis offers a multimodal analysis of trainer-athlete interaction in paraclimbing, based on a corpus of video-recorded climbing sessions. The major findings can be summarized as follows. 1) Intercorporeality is key to interactions between trainers and athletes with visual impairments: the participants orient to perceiving the climbing space, and to acting in it, as a 'We'. 2) The grammar, lexicon, prosody, and timing of the trainers' instructions are finely tuned to the climbers' ongoing corporeal experience. 3) Climbers with visual impairments build their actions using sensory resources provided by their trainers. This last result is of particular importance, as it shows that resources and constraints for action are fundamentally constituted in interaction with Others and with specific socio-material ecologies, rather than being defined a priori by the organs and functions of individuals' bodies and minds. Individual capabilities are thus enhanced and extended in interaction, which encourages a more ecological view of (dis)ability.