5 results for multimodal biometrics
at Universidad de Alicante
Abstract:
In this paper, a multimodal and interactive prototype to perform music genre classification is presented. The system is oriented to multi-part files in symbolic format, but it can be adapted by means of a transcription system to transform audio content into music scores. This prototype uses different sources of information to offer the user a possible answer. It has been developed to allow a human expert to interact with the system and improve its results. In its current implementation, it offers a limited range of interaction and multimodality. Further development aimed at full interactivity and multimodal interaction is discussed.
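As an illustration of how a prototype of this kind might combine evidence from several sources before presenting a candidate answer to the user, the sketch below applies a weighted late-fusion rule to per-source genre probabilities; the source names, weights, and genre labels are assumptions made for illustration, not the prototype's actual design.

```python
from collections import Counter

# Hypothetical per-source genre probabilities for one symbolic score.
# Each "source" (e.g. pitch statistics, rhythm statistics, metadata)
# is assumed to output a normalized distribution over genres.
source_predictions = {
    "pitch_stats":  {"classical": 0.6, "jazz": 0.3, "pop": 0.1},
    "rhythm_stats": {"classical": 0.2, "jazz": 0.5, "pop": 0.3},
    "metadata":     {"classical": 0.5, "jazz": 0.2, "pop": 0.3},
}

# Illustrative reliability weights per source (assumed, not from the paper).
weights = {"pitch_stats": 0.4, "rhythm_stats": 0.4, "metadata": 0.2}

def fuse(predictions, weights):
    """Late fusion: weighted sum of per-source genre probabilities."""
    scores = Counter()
    for source, dist in predictions.items():
        for genre, p in dist.items():
            scores[genre] += weights[source] * p
    return scores.most_common()  # ranked genre hypotheses for the user

ranking = fuse(source_predictions, weights)
print(ranking)  # a human expert could accept the top answer or override it
```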
Abstract:
International conference presentations represent one of the biggest challenges for academics using English as a Lingua Franca (ELF). This paper aims to initiate exploration into the multimodal academic discourse of oral presentations, including the verbal, written, non-verbal material (NVM) and body language modes. It offers a Systemic Functional Linguistic (SFL) and multimodal framework of presentations to enhance mixed-disciplinary ELF academics' awareness of what needs to be taken into account to communicate effectively at conferences. The model is also used to establish evaluation criteria for the presenters' talks and to carry out a multimodal discourse analysis of four well-rated 20-min talks, two from the technical sciences and two from the social sciences in a workshop scenario. The findings from the analysis and interviews indicate that: (a) a greater awareness of the mode affordances and their combinations can lead to improved performances; (b) higher reliance on the visual modes can compensate for verbal deficiencies; and (c) effective speakers tend to use a variety of modes that often overlap but work together to convey specific meanings. However, firm conclusions cannot be drawn on the basis of workshop presentations, and further studies on the multimodal analysis of ‘real conferences’ within specific disciplines are encouraged.
Abstract:
This thesis explores the role of multimodality in language learners’ comprehension and, more specifically, the effects on students’ audio-visual comprehension when different orchestrations of modes appear in the visualization of vodcasts. Firstly, I describe the state of the art in its three main areas of concern, namely the evolution of meaning-making, Information and Communication Technology (ICT), and audio-visual comprehension. One of the most important contributions of the theoretical overview is the suggested integrative model of audio-visual comprehension, which attempts to explain how students process information received from different inputs. Secondly, I present a study based on the following research questions: ‘Which modes are orchestrated throughout the vodcasts?’, ‘Are there any multimodal ensembles that are more beneficial for students’ audio-visual comprehension?’, and ‘What are the students’ attitudes towards audio-visual (e.g., vodcasts) compared to traditional audio (e.g., audio tracks) comprehension activities?’. Along with these research questions, I have formulated two hypotheses: audio-visual comprehension improves when there is a greater number of orchestrated modes, and students have a more positive attitude towards vodcasts than towards traditional audios when carrying out comprehension activities. The study includes a multimodal discourse analysis, audio-visual comprehension tests, and student questionnaires. The multimodal discourse analysis of two British Council language-learning vodcasts, entitled English is GREAT and Camden Fashion, using ELAN as the multimodal annotation tool, shows that there is a variety of multimodal ensembles of two, three and four modes. The audio-visual comprehension tests were given to 40 Spanish students learning English as a foreign language after they had viewed the vodcasts. These comprehension tests contain questions related to specific orchestrations of modes appearing in the vodcasts. The statistical analysis of the test results, using repeated-measures ANOVA, reveals that students obtain better audio-visual comprehension results when the multimodal ensembles are constituted by a greater number of orchestrated modes. Finally, the data compiled from the questionnaires show that students have a more positive attitude towards vodcasts than towards traditional audio listening activities. The results from the audio-visual comprehension tests and questionnaires confirm the two hypotheses of this study.
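A minimal sketch of a repeated-measures ANOVA of the kind described above, using statsmodels' AnovaRM in Python; the column names, number of students, and comprehension scores are toy assumptions for illustration, not the study's data.

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Toy data: comprehension scores per student under multimodal ensembles
# with different numbers of orchestrated modes (illustrative values only).
data = pd.DataFrame({
    "student": [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
    "n_modes": ["two", "three", "four"] * 4,
    "score":   [5, 6, 8, 4, 6, 7, 6, 7, 9, 5, 5, 8],
})

# Repeated-measures ANOVA with the number of modes as the within-subject factor.
result = AnovaRM(data, depvar="score", subject="student",
                 within=["n_modes"]).fit()
print(result)
```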
Abstract:
This article analyses the way in which the subject English Language V of the degree in English Studies (English Language and Literature) combines the development of the five skills (listening, speaking, reading, writing and interacting) with the use of multimodal activities and resources in the teaching-learning process, so that students increase their motivation and acquire different social competences that will be useful for the labour market, such as communication, cooperation, leadership or conflict management. This study highlights the use of multimodal materials (texts, videos, etc.) on social topics to introduce cultural aspects in a language subject and to examine in depth the different social competences university students can acquire when they work with them. The study was guided by the following research questions: how can multimodal texts and resources contribute to the development of the five skills in a foreign language classroom? What are the main social competences that students acquire when the teaching-learning process is multimodal? The results of a survey administered at the end of the academic year 2015-2016 point out the main competences that university students develop thanks to multimodal teaching. For its framework of analysis, the study draws on the main principles of visual grammar (Kress & van Leeuwen, 2006), through which students learn how to analyse the main aspects of multimodal texts. The analysis of the different multimodal activities described in the article and of the survey reveals that multimodality is useful for developing critical thinking, for bringing cultural aspects into the classroom and for working on social competences. This article will explain the successes and challenges of using multimodal texts with social content so that students can acquire social competences while learning content. Moreover, the implications of using multimodal resources in a language classroom to develop multiliteracies will be discussed.
Abstract:
The aim of this research paper is to analyse the key political posters made for the campaigns of the Irish political party Fianna Fáil during the Celtic Tiger (1997-2008) and post-Celtic Tiger (2009-2012) years. I will focus on the four posters of the candidate in the elections that took place in 1997, 2002, 2007 and 2011, with the intention of observing first how the leader is represented, and then pinpointing the similarities and possible differences between them. This is important in order to observe the main linguistic and visual strategies used to persuade the audience to vote for that party and to highlight the power of the politician. Critical discourse analysis tools will help to identify the main discursive strategies employed to persuade the Irish population to vote in a certain direction. Van Leeuwen’s (2008) social actor theory will facilitate the understanding of how participants are represented in the corpus under analysis. Finally, the main tools of Kress and van Leeuwen’s visual grammar (2006) will be applied to the analysis of the images. The study reveals that politicians are represented in a consistently positive way, with status and formal appearance, so that people are persuaded to vote for the party they represent because they trust them as political leaders. The study thus points out that the poster is a powerful tool used in election campaigns to highlight the power of political parties.