880 results for Translating and interpreting
Abstract:
Interpreting others’ emotions is theoretically foundational for children’s social competence, yet little research contrasts Emotion Understanding (EU) types against their theoretical correlates. This study investigated kindergartners’ situationistic EU (attributing emotions based on external events) and mentalistic EU (attributing emotions from others’ mental states) in relation to Theory of Mind (ToM) and social skills, as rated by parents and teachers. The EU measures were expected to have low associations with one another and to relate differently to ToM and select social skills. Mentalistic EU was expected to be an important predictor of teacher-rated social skills. Results supported the hypothesis that mentalistic EU and situationistic EU are distinct constructs. However, both relate to ToM. Furthermore, although ToM and situationistic EU variables were included in the regression model, only vocabulary and mentalistic EU were significant predictors of teacher-rated social skills. Results indicate the importance of mentalistic EU in aspects of kindergartners’ social competence.
Abstract:
The relevance of explicit instruction has been well documented in SLA research. Despite numerous positive findings, however, the issue continues to engage scholars worldwide. One issue that was largely neglected in previous empirical studies - and one that may be crucial for the effectiveness of explicit instruction - is the timing and integration of rules and practice. The present study investigated the extent to which grammar explanation (GE) before practice, grammar explanation during practice, and individual differences impact the acquisition of L2 declarative and procedural knowledge of two grammatical structures in Spanish. In this experiment, 128 English-speaking learners of Spanish were randomly assigned to four experimental treatments and completed comprehension-based task-essential practice for interpreting object-verb (OV) and ser/estar (SER) sentences in Spanish. Results confirmed the predicted importance of timing of GE: participants who received GE during practice were more likely to develop and retain their knowledge successfully. Results further revealed that the various combinations of rules and practice placed differential task demands on the learners and consequently drew on language aptitude and working memory (WM) to a different extent. Since these correlations between individual differences and learning outcomes were weakest in the conditions that received GE during practice, we argue that the suitable integration of rules and practice ameliorated task demands, reducing the burden on the learner, and accordingly mitigated the role of participants’ individual differences. Finally, some evidence also showed that the comprehension practice that participants received for the two structures was not sufficient for the formation of solid productive knowledge, but was more effective for the OV than for the SER construction.
Abstract:
Doctorate in Management
Abstract:
The complex three-dimensional (3-D) structure of tropical forests generates a diversity of light environments for canopy and understory trees. Understanding diurnal and seasonal changes in light availability is critical for interpreting measurements of net ecosystem exchange and improving ecosystem models. Here, we used the Discrete Anisotropic Radiative Transfer (DART) model to simulate leaf absorption of photosynthetically active radiation (lAPAR) for an Amazon forest. The 3-D model scene was developed from airborne lidar data, and local measurements of leaf reflectance, aerosols, and PAR were used to model lAPAR under direct and diffuse illumination conditions. Simulated lAPAR under clear-sky and cloudy conditions was corrected for light saturation effects to estimate light utilization, the fraction of lAPAR available for photosynthesis. Although the fraction of incoming PAR absorbed by leaves was consistent throughout the year (0.80–0.82), light utilization varied seasonally (0.67–0.74), with minimum values during the Amazon dry season. Shadowing and light saturation effects moderated potential gains in forest productivity from increasing PAR during dry-season months when the diffuse fraction from clouds and aerosols was low. Comparisons between DART and other models highlighted the role of 3-D forest structure to account for seasonal changes in light utilization. Our findings highlight how directional illumination and forest 3-D structure combine to influence diurnal and seasonal variability in light utilization, independent of further changes in leaf area, leaf age, or environmental controls on canopy photosynthesis. Changing illumination geometry constitutes an alternative biophysical explanation for observed seasonality in Amazon forest productivity without changes in canopy phenology.
Abstract:
Decades of costly failures in translating drug candidates from preclinical disease models to human therapeutic use warrant reconsideration of the priority placed on animal models in biomedical research. Following an international workshop attended by experts from academia, government institutions, research funding bodies, and the corporate and nongovernmental organisation (NGO) sectors, in this consensus report, we analyse, as case studies, five disease areas with major unmet needs for new treatments. In view of the scientifically driven transition towards a human pathway-based paradigm in toxicology, a similar paradigm shift appears to be justified in biomedical research. There is a pressing need for an approach that strategically implements advanced, human biology-based models and tools to understand disease pathways at multiple biological scales. We present recommendations to help achieve this.
Abstract:
Globalisation has transformed “independence” into, at best, “inter-dependence”. In Latin American film, this process has been experienced as a decline in national productions, now usually co-productions, and a tendency towards self-exoticising as films cater to a festival-circuit global audience; similarly, theatrical exhibition takes place in a handful of global multiplex complexes. Moreover, narrative film itself has long been regarded as inherently “dependent” on the conservative sectors that have provided its finance, with the word “independent” referring to authorial features only. However, the very same processes that have allowed for such an unprecedented corporate control of these film industries have also spawned a parallel network of local, regional and national filmmaking, distribution and exhibition through digital media. From the “Mi Cine” project in Mexico to the “Cine Piquetero” in Argentina, digital filmmaking is empowering viewers and restoring agency to local filmmakers. In this paper I argue for this understanding of “independence” in the contemporary cinematic spheres of Latin America: the re-appropriation, amidst the transnationalism of the day, of the democratising potential of cinema that Walter Benjamin once thought was inherent to the medium.
Abstract:
Aim: To present the qualitative findings from a study on the development of scheme(s) to give evidence of maintenance of professional competence for nurses and midwives. Background: Key issues in maintenance of professional competence include notions of self-assessment, verification of engagement and practice hours, provision of an evidential record, the role of the employer and articulation of possible consequences for non-adherence with the requirements. Schemes to demonstrate the maintenance of professional competence have application to nurses, midwives, regulatory bodies and healthcare employers worldwide. Design: A mixed methods approach was used. This included an online survey of nurses and midwives and focus groups with nurses and midwives and other key stakeholders. The qualitative data are reported in this study. Methods: Focus groups were conducted among a purposive sample of nurses, midwives and key stakeholders from January–May 2015. A total of 13 focus groups with 91 participants contributed to the study. Findings: Four major themes were identified: Definitions and Characteristics of Competence; Continuing Professional Development and Demonstrating Competence; Assessment of Competence; The Nursing and Midwifery Board of Ireland and employers as regulators and enablers of maintaining professional competence. Conclusion: Competence incorporates knowledge, skills, attitudes, professionalism, application of evidence and translating learning into practice. It is specific to the nurse's/midwife's role, organizational needs, patient's needs and the individual nurse's/midwife's learning needs. Competencies develop over time and change as nurses and midwives work in different practice areas. Thus, role-specific competence is linked to recent engagement in practice.
Abstract:
BACKGROUND: Conceptualization of quality of care, in terms of what individuals, groups and organizations include in their meaning of quality, is an unexplored research area. It is important to understand how quality is conceptualized as a means to successfully implement improvement efforts and bridge a potential disconnect in language about quality between system levels, professions, and clinical services. The aim is therefore to explore and compare conceptualization of quality among national bodies (macro level), senior hospital managers (meso level), and professional groups within clinical micro systems (micro level) in a cross-national study. METHODS: This cross-national multi-level case study combines analysis of national policy documents and regulations at the macro level with semi-structured interviews (383) and non-participant observation (803 hours) of key meetings and shadowing of staff at the meso and micro levels in ten purposively sampled European hospitals (England, the Netherlands, Portugal, Sweden, and Norway). Fieldwork at the meso and micro levels was undertaken over a 12-month period (2011-2012) and different types of micro systems were included (maternity, oncology, orthopaedics, elderly care, intensive care, and geriatrics). RESULTS: The three quality dimensions clinical effectiveness, patient safety, and patient experience were incorporated in macro level policies in all countries. Senior hospital managers adopted a similar conceptualization, but also included efficiency and costs in their conceptualization of quality. 'Quality' in the form of measured indicators and performance management was dominant among senior hospital managers (with clinical and non-clinical backgrounds). The differential emphasis on the three quality dimensions was strongly linked to professional roles, personal ideas, and beliefs at the micro level.
Clinical effectiveness was dominant among physicians (evidence-based approach), while patient experience was dominant among nurses (patient-centered care, enough time to talk with patients). Conceptualization varied between micro systems depending on the type of services provided. CONCLUSION: The quality conceptualization differed across system levels (macro-meso-micro), among professional groups (nurses, doctors, managers), and between the studied micro systems in our ten sampled European hospitals. This entails a managerial alignment challenge in translating macro-level quality definitions into different local contexts.
Abstract:
This thesis is about young students’ writing in school mathematics and the ways in which this writing is designed, interpreted and understood. Students’ communication can act as a source from which teachers can make inferences regarding students’ mathematical knowledge and understanding. Previous research in mathematics education indicates that teachers assume that the process of interpreting and judging students’ writing is unproblematic. The relationship between what students write and what they know or understand is theoretical as well as empirical. In an era of increased focus on assessment and measurement in education it is necessary for teachers to know more about the relationship between communication and achievement. To add to this knowledge, the thesis has adopted a broad approach, and the thesis consists of four studies. The aim of these studies is to reach a deep understanding of writing in school mathematics. Such an understanding is dependent on examining different aspects of writing. The four studies together examine how the concept of communication is described in authoritative texts, how students’ writing is viewed by teachers and how students make use of different communicational resources in their writing. The results of the four studies indicate that students’ writing is more complex than is acknowledged by teachers and authoritative texts in mathematics education. Results point to a sophistication in students’ approach to the merging of the two functions of writing, writing for oneself and writing for others. Results also suggest that students attend, to various extents, to questions regarding how, what and for whom they are writing in school mathematics. The relationship between writing and achievement is dependent on students’ ability to have their writing reflect their knowledge and on teachers’ thorough knowledge of the different features of writing and their awareness of its complexity.
From a communicational perspective the ability to communicate [in writing] in mathematics can and should be distinguished from other mathematical abilities. By acknowledging that mathematical communication integrates mathematical language and natural language, teachers have an opportunity to turn writing in mathematics into an object of learning. This offers teachers the potential to add to their assessment literacy and offers students the potential to develop their communicational ability in order to write in a way that better reflects their mathematical knowledge.
Abstract:
More than 125 years after its foundation, the Pasteur Institute is still one of the world’s largest, best known and most powerful biomedical research institutions.
Abstract:
One of the main process features under study in Cognitive Translation & Interpreting Studies (CTIS) is the chronological unfolding of the tasks. The analyses of time spans in translation have been conceived in two ways: (1) studying those falling between text units of different sizes: words, phrases, sentences, and paragraphs; (2) setting arbitrary time span thresholds to explore where they fall in the text, whether between text units or not. Writing disfluencies may lead to comprehensive insights into the cognitive activities involved in typing while translating. Indeed, long time spans are often taken as hints that cognitive resources have been subtracted from typing and devoted to other activities, such as planning, evaluating, etc. This exploratory, pilot study combined both approaches to seek potential general tendencies and contrasts in informants’ inferred mental processes when performing different writing tasks, through the analysis of their behaviors, as keylogged. The study tasks were retyping, monolingual free writing, translation, revision and a multimodal task—namely, monolingual text production based on an infographic leaflet. Task logs were chunked, and shorter time spans, including those within words, were analyzed following the Task Segment Framework (Muñoz & Apfelthaler, in press). Finally, time span analysis was combined with the analysis of the texts as to their lexical density, type-token ratio and word frequency. Several previous results were confirmed, and some others were surprising. Time spans in free writing were longer between paragraphs and sentences, possibly hinting at planning and, in translation, between clauses and words, suggesting more cognitive activities at these levels. On the other hand, the infographic was expected to facilitate the writing process, but most time spans were longer than in both free writing and translation. Results of the multimodal task and some other results suggest avenues for further research.
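The surface text measures named in this abstract (type-token ratio, word frequency) follow standard definitions and are straightforward to compute. The sketch below illustrates them with a naive whitespace tokenizer; the tokenizer and sample text are assumptions for illustration, not the study's actual tooling.

```python
from collections import Counter

PUNCT = ".,;:!?\"'()"

def tokenize(text):
    # Naive lowercase whitespace tokenizer with punctuation stripping
    # -- a simplifying assumption, not a linguistically robust tool.
    tokens = [w.strip(PUNCT) for w in text.lower().split()]
    return [t for t in tokens if t]

def type_token_ratio(tokens):
    # Number of unique word types divided by total token count.
    return len(set(tokens)) / len(tokens) if tokens else 0.0

def word_frequencies(tokens):
    # Raw frequency count per word type.
    return Counter(tokens)

sample = "the cat sat on the mat and the dog sat too"
toks = tokenize(sample)
print(len(toks))                         # 11 tokens
print(round(type_token_ratio(toks), 3))  # 0.727 (8 types / 11 tokens)
print(word_frequencies(toks)["the"])     # 3
```

Lexical density (the share of content words) would additionally require a part-of-speech tagger or a stopword list to separate content words from function words.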
Abstract:
It has recently been noticed that interpreters tend to converge with their speakers’ emotions through a process known as emotional contagion. Emotional contagion still represents an underinvestigated aspect of interpreting, and the few studies on this topic have tended to focus more on simultaneous interpreting than on consecutive interpreting. Korpal & Jasielska (2019) compared the emotional effects of one emotional and one neutral text on interpreters in simultaneous interpreting and found that interpreters tended to converge emotionally with the speaker more when interpreting the emotional text. This exploratory study follows their procedures to study the emotional contagion potentially caused by two texts among interpreters in consecutive interpreting: one emotionally neutral text and one negatively-valenced text, the latter containing 44 negative words as triggers. Several measures were triangulated to determine whether the triggers in the negatively-valenced text could prompt a stronger emotional contagion in the consecutive interpreting of that text as compared to the consecutive interpreting of the emotionally neutral text, which contained no triggers—namely, the quality of the interpreters’ delivery; their heart rate variability values as collected with EMPATICA E4 wristbands; the analysis of their acoustic variations (i.e., disfluencies and rhetorical strategies); their linguistic and emotional management of the triggers; and their answers to the Italian version of the Positive and Negative Affect Schedule (PANAS) self-report questionnaire. Results showed no statistically significant evidence of emotional contagion evoked by the triggers in the consecutive interpreting of the negative text as opposed to the consecutive interpreting of the neutral text. On the contrary, interpreters seemed to be more at ease while interpreting the negative text. This surprising result, together with other results of this project, suggests avenues for further research.
Abstract:
Machine learning is widely adopted to decode multi-variate neural time series, including electroencephalographic (EEG) and single-cell recordings. Recent solutions based on deep learning (DL) outperformed traditional decoders by automatically extracting relevant discriminative features from raw or minimally pre-processed signals. Convolutional Neural Networks (CNNs) have been successfully applied to EEG and are the most common DL-based EEG decoders in the state-of-the-art (SOA). However, the current research is affected by some limitations. SOA CNNs for EEG decoding usually exploit deep and heavy structures with the risk of overfitting small datasets, and architectures are often defined empirically. Furthermore, CNNs are mainly validated by designing within-subject decoders. Crucially, the automatically learned features largely remain unexplored; conversely, interpreting these features may be of great value to use decoders also as analysis tools, highlighting neural signatures underlying the different decoded brain or behavioral states in a data-driven way. Lastly, SOA DL-based algorithms used to decode single-cell recordings rely on more complex, slower to train and less interpretable networks than CNNs, and the use of CNNs with these signals has not been investigated. This PhD research addresses the previous limitations, with reference to P300 and motor decoding from EEG, and motor decoding from single-neuron activity. CNNs were designed light, compact, and interpretable. Moreover, multiple training strategies were adopted, including transfer learning, which could reduce training times and promote the application of CNNs in practice. Furthermore, CNN-based EEG analyses were proposed to study neural features in the spatial, temporal and frequency domains, and proved to better highlight and enhance relevant neural features related to P300 and motor states than canonical EEG analyses.
Remarkably, these analyses could prospectively be used to design novel EEG biomarkers for neurological or neurodevelopmental disorders. Lastly, CNNs were developed to decode single-neuron activity, providing a better compromise between performance and model complexity.
Abstract:
The discovery of new materials and their functions has always been a fundamental component of technological progress. Nowadays, the quest for new materials is stronger than ever: sustainability, medicine, robotics and electronics are all key assets which depend on the ability to create specifically tailored materials. However, designing materials with desired properties is a difficult task, and the complexity of the discipline makes it difficult to identify general criteria. While scientists have developed a set of best practices (often based on experience and expertise), this is still a trial-and-error process. This becomes even more complex when dealing with advanced functional materials. Their properties depend on structural and morphological features, which in turn depend on fabrication procedures and environment, and subtle alterations lead to dramatically different results. Because of this, materials modeling and design is one of the most prolific research fields. Many techniques and instruments are continuously developed to enable new possibilities, both in the experimental and computational realms. Scientists strive to employ cutting-edge technologies in order to make progress. However, the field is strongly affected by unorganized file management and the proliferation of custom data formats and storage procedures, both in experimental and computational research. Results are difficult to find, interpret and re-use, and a huge amount of time is spent interpreting and re-organizing data. This also strongly limits the application of data-driven and machine learning techniques. This work introduces possible solutions to the problems described above.
Specifically, it describes: developing features for specific classes of advanced materials and using them to train machine learning models that accelerate computational predictions for molecular compounds; developing methods for organizing non-homogeneous materials data; automating the process of using device simulations to train machine learning models; and dealing with scattered experimental data, using it to discover new patterns.
Abstract:
Frame. Assessing the difficulty of source texts and parts thereof is important in CTIS, whether for research comparability, for didactic purposes or for setting price differences in the market. In order to empirically measure it, Campbell & Hale (1999) and Campbell (2000) developed the Choice Network Analysis (CNA) framework. Basically, the CNA’s main hypothesis is that the more translation options (a group of) translators have to render a given source text stretch, the higher the difficulty of that text stretch will be. We will call this the CNA hypothesis. In a nutshell, this research project puts the CNA hypothesis to the test and studies whether it does actually measure difficulty. Data collection. Two groups of participants (n=29) of different profiles and from two universities in different countries had three translation tasks keylogged with Inputlog, and filled in pre- and post-translation questionnaires. Participants translated from English (L2) into their L1s (Spanish or Italian), and worked—first in class and then at home—using their own computers, on texts ca. 800–1000 words long. Each text was translated in approximately equal halves in two 1-hour sessions, in three consecutive weeks. Only the parts translated at home were considered in the study. Results. A very different picture emerged from the data than that which the CNA hypothesis might predict: there was no prevalence of disfluent task segments when there were many translation options, nor was a prevalence of fluent task segments associated with fewer translation options. Indeed, there was no correlation between the number of translation options (many and few) and behavioral fluency. Additionally, there was no correlation between pauses and either behavioral fluency or typing speed. The discussed theoretical flaws and the empirical evidence lead to the conclusion that the CNA framework does not and cannot measure text and translation difficulty.