883 results for Multimodal Biometrics
Abstract:
International conference presentations represent one of the biggest challenges for academics using English as a Lingua Franca (ELF). This paper aims to initiate exploration into the multimodal academic discourse of oral presentations, including the verbal, written, non-verbal material (NVM) and body language modes. It offers a Systemic Functional Linguistic (SFL) and multimodal framework of presentations to enhance mixed-disciplinary ELF academics' awareness of what needs to be taken into account to communicate effectively at conferences. The model is also used to establish evaluation criteria for the presenters' talks and to carry out a multimodal discourse analysis of four well-rated 20-min talks, two from the technical sciences and two from the social sciences in a workshop scenario. The findings from the analysis and interviews indicate that: (a) a greater awareness of the mode affordances and their combinations can lead to improved performances; (b) higher reliance on the visual modes can compensate for verbal deficiencies; and (c) effective speakers tend to use a variety of modes that often overlap but work together to convey specific meanings. However, firm conclusions cannot be drawn on the basis of workshop presentations, and further studies on the multimodal analysis of ‘real conferences’ within specific disciplines are encouraged.
Abstract:
This thesis explores the role of multimodality in language learners’ comprehension and, more specifically, the effects on students’ audio-visual comprehension when different orchestrations of modes appear in vodcasts. Firstly, I describe the state of the art in its three main areas of concern, namely the evolution of meaning-making, Information and Communication Technology (ICT), and audio-visual comprehension. One of the most important contributions in the theoretical overview is the suggested integrative model of audio-visual comprehension, which attempts to explain how students process information received from different inputs. Secondly, I present a study based on the following research questions: ‘Which modes are orchestrated throughout the vodcasts?’, ‘Are there any multimodal ensembles that are more beneficial for students’ audio-visual comprehension?’, and ‘What are the students’ attitudes towards audio-visual (e.g., vodcasts) compared to traditional audio (e.g., audio tracks) comprehension activities?’. Along with these research questions, I have formulated two hypotheses: audio-visual comprehension improves when there is a greater number of orchestrated modes, and students have a more positive attitude towards vodcasts than towards traditional audios when carrying out comprehension activities. The study includes a multimodal discourse analysis, audio-visual comprehension tests, and student questionnaires. The multimodal discourse analysis of two British Council language-learning vodcasts, entitled English is GREAT and Camden Fashion, using ELAN as the multimodal annotation tool, shows that there is a variety of multimodal ensembles of two, three and four modes. The audio-visual comprehension tests were given to 40 Spanish students learning English as a foreign language after they had watched the vodcasts. These comprehension tests contain questions related to specific orchestrations of modes appearing in the vodcasts.
The statistical analysis of the test results, using repeated-measures ANOVA, reveals that students obtain better audio-visual comprehension results when the multimodal ensembles are constituted by a greater number of orchestrated modes. Finally, the data compiled from the questionnaires indicate that students have a more positive attitude towards vodcasts than towards traditional audio listening. Results from the audio-visual comprehension tests and questionnaires support the two hypotheses of this study.
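A one-way repeated-measures (within-subjects) ANOVA of the kind named above can be sketched in a few lines. This is an illustrative textbook reimplementation, not the study's code, and the scores below are invented stand-ins for the real data collected from the 40 students:

```python
# Illustrative one-way repeated-measures ANOVA: the same subjects are
# scored under every condition (here, multimodal ensembles of 2, 3 and
# 4 modes), so subject variability is removed from the error term.

def repeated_measures_anova(conditions):
    """conditions: dict mapping condition name -> list of per-subject
    scores (same subjects, same order, in every condition).
    Returns (F, df_conditions, df_error)."""
    names = list(conditions)
    k = len(names)                       # number of conditions
    n = len(conditions[names[0]])        # number of subjects
    grand = sum(sum(v) for v in conditions.values()) / (k * n)
    cond_means = [sum(v) / n for v in conditions.values()]
    subj_means = [sum(conditions[c][s] for c in names) / k for s in range(n)]
    ss_cond = n * sum((m - grand) ** 2 for m in cond_means)
    ss_subj = k * sum((m - grand) ** 2 for m in subj_means)
    ss_total = sum((x - grand) ** 2 for c in names for x in conditions[c])
    ss_error = ss_total - ss_cond - ss_subj
    df1, df2 = k - 1, (k - 1) * (n - 1)
    return (ss_cond / df1) / (ss_error / df2), df1, df2

# Synthetic comprehension scores for six hypothetical students.
scores = {
    "two_modes":   [55, 60, 52, 58, 61, 57],
    "three_modes": [63, 68, 60, 66, 70, 64],
    "four_modes":  [72, 75, 70, 74, 78, 73],
}
f_stat, df1, df2 = repeated_measures_anova(scores)
print(f"F({df1}, {df2}) = {f_stat:.2f}")
```

With scores that rise consistently as more modes are orchestrated, the F statistic comes out large, mirroring the direction of the effect the study reports.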
Abstract:
This article analyses the way in which the subject English Language V of the degree in English Studies (English Language and Literature) combines the development of the five skills (listening, speaking, reading, writing and interacting) with the use of multimodal activities and resources in the teaching-learning process, so that students increase their motivation and acquire different social competences that will be useful for the labour market, such as communication, cooperation, leadership or conflict management. This study highlights the use of multimodal materials (texts, videos, etc.) on social topics to introduce cultural aspects in a language subject and to explore in depth the different social competences university students can acquire when they work with them. The study was guided by the following research questions: how can multimodal texts and resources contribute to the development of the five skills in a foreign language classroom? What are the main social competences that students acquire when the teaching-learning process is multimodal? The results of a survey administered at the end of the academic year 2015-2016 point to the main competences that university students develop thanks to multimodal teaching. For its framework of analysis, the study draws on the main principles of visual grammar (Kress & van Leeuwen, 2006), through which students learn how to analyse the main aspects of multimodal texts. The analysis of the different multimodal activities described in the article and the survey reveals that multimodality is useful for developing critical thinking, for bringing cultural aspects into the classroom and for working on social competences. This article explains the successes and challenges of using multimodal texts with social content so that students can acquire social competences while learning content. Moreover, the implications of using multimodal resources in a language classroom to develop multiliteracies are also discussed.
Abstract:
The aim of this research paper is to analyse the key political posters made for the campaigns of the Irish political party Fianna Fáil during the Celtic Tiger (1997-2008) and post-Celtic Tiger years (2009-2012). I will focus on the four posters of the candidate in the elections that took place in 1997, 2002, 2007 and 2011, with the intention of observing first how the leader is represented, and then pinpointing the similarities and possible differences between them. This is important in order to observe the main linguistic and visual strategies used to persuade the audience to vote for that party and to highlight the power of the politician. Critical discourse analysis tools will help identify the main discursive strategies employed to persuade the Irish population to vote in a certain direction. Van Leeuwen’s (2008) social actor theory will facilitate the understanding of how participants are represented in the corpus under analysis. Finally, the main tools of Kress and van Leeuwen’s visual grammar (2006) will be applied to the analysis of the images. The study reveals that politicians are represented in a consistently positive way, with status and formal appearance, so that people are persuaded to vote for the party they represent because they trust them as political leaders. The study thus points out that the poster is a powerful tool used in election campaigns to highlight the power of political parties.
Abstract:
People have been able to view and interact with 3D models for a long time. However, vision-based 3D modeling has seen only limited success in applications, as it faces many technical challenges. Hand-held mobile devices have changed the way we interact with virtual reality environments. Their high mobility and technical features, such as inertial sensors, cameras and fast processors, are especially attractive for advancing the state of the art in virtual reality systems. Also, their ubiquity and fast Internet connections open a path to distributed and collaborative development. However, this path has not been fully explored in many domains. VR systems for real-world engineering contexts are still difficult to use, especially when geographically dispersed engineering teams need to collaboratively visualize and review 3D CAD models. Another challenge is rendering these environments at the required interactive rates and with high fidelity. This document presents a mobile virtual reality system for visualizing, navigating and reviewing large-scale 3D CAD models, developed under the CEDAR (Collaborative Engineering Design and Review) project. It focuses on interaction using different navigation modes. The system uses the mobile device's inertial sensors and camera to allow users to navigate through large-scale models. IT professionals, architects, civil engineers and oil industry experts were involved in a qualitative assessment of the CEDAR system, in the form of direct user interaction with the prototypes and audio-recorded interviews about them. The lessons learned are valuable and are presented in this document. Subsequently, a quantitative study of the different navigation modes was conducted to analyze which mode is best suited to a given situation.
Abstract:
Otto-von-Guericke-Universität Magdeburg, Faculty of Natural Sciences, dissertation, 2016
Abstract:
PURPOSE The purpose of this study was to identify SD-OCT changes that correspond to leakage on fluorescein angiography (FA) and indocyanine green angiography (ICGA) and to evaluate the effect of half-fluence photodynamic therapy (PDT) on choroidal volume in chronic central serous chorioretinopathy (CSC). METHODS Retrospective analysis of patients with chronic CSC who had undergone PDT. Baseline FA and ICGA images were overlaid on SD-OCT to identify OCT correlates of FA or ICGA hyperfluorescence. Choroidal volume was evaluated in a subgroup of eyes before and after PDT. RESULTS Twenty eyes were evaluated at baseline, of which seven eyes had choroidal volume evaluations at baseline and 3 months following PDT. SD-OCT changes corresponding to FA hyperfluorescence were subretinal fluid (73%), RPE microrip (50%), RPE double-layer sign (31%), RPE detachment (15%), and RPE thickening (8%). ICGA hyperfluorescence was correlated in 93% with hyperreflective spots in the superficial choroid. Choroidal volume decreased from 9.35 ± 1.99 to 8.52 ± 1.92 and 8.04 ± 1.7 mm(3) (at 1 and 3 months post PDT, respectively, p ≤ 0.001). CONCLUSIONS We identified specific OCT findings that correlate with FA and ICGA leakage sites. SD-OCT is a valuable tool to localize CSC lesions and may be useful to guide PDT treatment. Generalized choroidal volume decrease occurs following PDT and extends beyond the PDT treatment site.
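As a quick arithmetic check, the relative choroidal volume reduction implied by the abstract's figures can be computed directly from the quoted group means (these are the reported means, not patient-level data):

```python
# Group-mean choroidal volumes reported in the abstract (mm^3).
baseline, month1, month3 = 9.35, 8.52, 8.04

# Percentage decrease relative to baseline at each follow-up visit.
drop1 = (baseline - month1) / baseline * 100
drop3 = (baseline - month3) / baseline * 100
print(f"decrease vs baseline: {drop1:.1f}% at 1 month, {drop3:.1f}% at 3 months")
```

That is, roughly a 9% mean decrease at 1 month and 14% at 3 months after half-fluence PDT.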
Abstract:
Federal Highway Administration, Office of Safety and Traffic Operations Research and Development, McLean, Va.
Abstract:
Multigraphed.
Abstract:
In this essay I examine which model readers are constructed in two different kinds of emails sent by Greenpeace in Sweden to people who engage with the organisation's work. I carry out a multimodal text analysis grounded in dialogism and social semiotic theory, using analytical methods from systemic-functional grammar. The results show that the two email types are, on the whole, very similar, but that there are certain differences, and that the two types thereby construct partly different model readers to which actual readers must relate. The model readers are created through realisations of various linguistic and visual meaning-making resources, such as presuppositions, processes, distance, and speech and image acts. What the model readers have in common is that they sympathise with Greenpeace, take an active agent role, and have a close and equal relationship with the organisation.
Abstract:
Thesis (Master's)--University of Washington, 2016-06
Abstract:
The rigors of establishing innateness and domain specificity pose challenges to adaptationist models of music evolution. In articulating a series of constraints, the authors of the target articles provide strategies for investigating the potential origins of music. We propose additional approaches for exploring theories based on exaptation. We discuss a view of music as a multimodal system of engaging with affect, enabled by capacities of symbolism and a theory of mind.
Abstract:
A vision of the future of intraoperative monitoring for anesthesia is presented: a multimodal world based on advanced sensing capabilities. I explore progress towards this vision, outlining the general nature of the anesthetist's monitoring task and the dangers of attentional capture. Research in attention indicates different kinds of attentional control, such as endogenous and exogenous orienting, which are critical to how awareness of patient state is maintained, but which may work differently across different modalities. Four kinds of medical monitoring displays are surveyed: (1) integrated visual displays, (2) head-mounted displays, (3) advanced auditory displays and (4) auditory alarms. Achievements and challenges in each area are outlined. In future research, we should focus more clearly on identifying anesthetists' information needs, and we should develop models of attention in and across different modalities that are more capable of guiding design.
Abstract:
This paper reflects upon our attempts to bring a participatory design approach to design research into interfaces that better support dental practice. The project brought together design researchers, general and specialist dental practitioners, the CEO of a dental software company and, to a limited extent, dental patients. We explored the potential for deployment of speech and gesture technologies in the challenging and authentic context of dental practices. The paper describes the various motivations behind the project, the negotiation of access and the development of the participant relationships as seen from the researchers' perspectives. Conducting participatory design sessions with busy professionals demands preparation, improvisation, and clarity of purpose. The paper describes how we identified what went well and when to shift tactics. The contribution of the paper is in its description of what we learned in bringing participatory design principles to a project that spanned technical research interests, commercial objectives and placing demands upon the time of skilled professionals.
Abstract:
This Thesis addresses the problem of automated, false-positive-free detection of epileptic events by the fusion of information extracted from simultaneously recorded electroencephalographic (EEG) and electrocardiographic (ECG) time series. The approach relies on a biomedical case for the coupling of the brain and heart systems through the central autonomic network during temporal lobe epileptic events: neurovegetative manifestations associated with temporal lobe epileptic events consist of alterations to the cardiac rhythm. From a neurophysiological perspective, epileptic episodes are characterised by a loss of complexity of the state of the brain. The description of arrhythmias observed during temporal lobe epileptic events, from a probabilistic perspective, and the description of the complexity of the state of the brain, from an information theory perspective, are integrated in a fusion-of-information framework for temporal lobe epileptic seizure detection. The main contributions of the Thesis include the introduction of a biomedical case for the coupling of the brain and heart systems during temporal lobe epileptic seizures, partially reported in the clinical literature; the investigation of measures for the characterisation of ictal events from the EEG time series towards their integration in a fusion-of-knowledge framework; the probabilistic description of arrhythmias observed during temporal lobe epileptic events towards their integration in a fusion-of-knowledge framework; and the investigation of the different levels of the fusion-of-information architecture at which to combine the information extracted from the EEG and ECG time series. The method designed in the Thesis for false-positive-free automated detection of epileptic events achieved a false-positive rate of zero on the dataset of long-term recordings used in the Thesis.
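One of the fusion levels mentioned above, decision-level fusion, can be illustrated with a minimal sketch. The per-epoch detector flags below are hypothetical, and the AND rule is a stand-in for the thesis's actual combination scheme: an epoch is flagged as ictal only when both the EEG-based and the ECG-based detectors agree, which is the mechanism that suppresses single-modality false positives:

```python
# Decision-level fusion sketch: combine binary per-epoch outputs of an
# EEG-based detector (loss of brain-state complexity) and an ECG-based
# detector (ictal arrhythmia) with an AND rule.

def fuse_decisions(eeg_flags, ecg_flags):
    """Return per-epoch flags that are set only when both detectors agree."""
    return [int(e and c) for e, c in zip(eeg_flags, ecg_flags)]

eeg_flags = [0, 1, 1, 0, 1]  # hypothetical EEG detector output per epoch
ecg_flags = [0, 0, 1, 0, 1]  # hypothetical ECG detector output per epoch
fused = fuse_decisions(eeg_flags, ecg_flags)
print(fused)  # an isolated EEG-only detection (epoch 2) is rejected
```

Fusion can equally be performed at the feature or score level; the thesis investigates at which of these levels the EEG- and ECG-derived information is best combined.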