920 results for French as a second language
Abstract:
Consequence analysis is a key step in anchoring the assessment of landslide impacts in present and long-term development planning. Although several approaches have been developed over the last decade, some of them are difficult to apply in practice, mainly because of the lack of reliable data on historical damage or on damage functions. In this paper, two possible consequence indicators based on a combination of descriptors of the exposure of the elements at risk are proposed in order to map the potential impacts of landslides and highlight the most vulnerable areas. The first index maps the physical vulnerability to landslides; the second index maps both the direct damage (physical, structural, functional) and the indirect damage (socio-economic impacts) of landslide hazards. The indices have been computed for the 200 km² area of the Barcelonnette Basin (southern French Alps), and their potential applications are discussed.
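As a rough illustration of how such an index can be assembled (the descriptor names and weights below are hypothetical placeholders, not the indicators defined in the paper), a composite consequence score for one map unit might be computed as a normalized weighted sum of exposure descriptors:

    # Hypothetical sketch: composite consequence index for one grid cell, computed
    # as a weighted sum of exposure descriptors normalized to [0, 1].
    # Descriptor names and weights are illustrative only.
    def consequence_index(descriptors, weights):
        total_weight = sum(weights.values())
        return sum(weights[k] * descriptors[k] for k in weights) / total_weight

    cell = {"buildings": 0.8, "roads": 0.4, "population": 0.6}
    weights = {"buildings": 0.5, "roads": 0.2, "population": 0.3}
    print(consequence_index(cell, weights))  # 0.66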
Abstract:
INTRODUCTION The orthographic depth hypothesis (Katz and Feldman, 1983) posits that different reading routes are engaged depending on the type of grapheme/phoneme correspondence of the language being read. Shallow orthographies with consistent grapheme/phoneme correspondences favor encoding via non-lexical pathways, where each grapheme is sequentially mapped to its corresponding phoneme. In contrast, deep orthographies with inconsistent grapheme/phoneme correspondences favor lexical pathways, where phonemes are retrieved from specialized memory structures. This hypothesis, however, lacks compelling empirical support. The aim of the present study was to investigate the impact of orthographic depth on reading route selection using a within-subject design.
METHOD We presented the same pseudowords (PWs) to highly proficient bilinguals and manipulated the orthographic depth of PW reading by embedding them in two separate language contexts, German or French, implicating a shallow or a deep orthography, respectively. High-density electroencephalography was recorded during the task.
RESULTS The topography of the ERPs to identical PWs differed 300-360 ms post-stimulus onset when the PWs were read in the two orthographic depth contexts, indicating that distinct brain networks were engaged in reading during this time window. The brain sources underlying these topographic effects were located within left inferior frontal (German > French), parietal (French > German) and cingulate (German > French) areas.
CONCLUSION Reading in a shallow context favors non-lexical pathways, reflected in a stronger engagement of frontal phonological areas in the shallow versus the deep orthographic context. In contrast, reading PWs in a deep orthographic context relies less on routine non-lexical pathways, reflected in a stronger engagement of visuo-attentional parietal areas in the deep versus the shallow orthographic context. Collectively, these results support a modulation of the reading route by orthographic depth.
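For readers unfamiliar with topographic ERP comparisons, the sketch below shows one standard way to quantify a difference between two scalp maps in a time window: global map dissimilarity (DISS) between average-referenced, GFP-normalized topographies. The data are random placeholders, and this is not necessarily the exact statistic used in the study.

    import numpy as np

    # Toy sketch with synthetic data: global map dissimilarity (DISS) between two
    # ERP topographies averaged over the 300-360 ms window.
    rng = np.random.default_rng(0)
    times = np.linspace(-0.1, 0.5, 301)                    # seconds
    erp_shallow = rng.standard_normal((64, times.size))    # 64 electrodes x time
    erp_deep = rng.standard_normal((64, times.size))

    def gfp(v):
        return np.sqrt(np.mean((v - v.mean()) ** 2))       # global field power

    def dissimilarity(u, v):
        u = (u - u.mean()) / gfp(u)                        # average reference + GFP normalization
        v = (v - v.mean()) / gfp(v)
        return np.sqrt(np.mean((u - v) ** 2))

    window = (times >= 0.300) & (times <= 0.360)
    map_shallow = erp_shallow[:, window].mean(axis=1)
    map_deep = erp_deep[:, window].mean(axis=1)
    print(dissimilarity(map_shallow, map_deep))            # higher = more different topographies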
Abstract:
Converging evidence from eye movement experiments indicates that linguistic contexts influence reading strategies. However, the question of whether different linguistic contexts modulate eye movements during reading in the same bilingual individuals remains unresolved. We examined reading strategies in a transparent (German) and an opaque (French) language of early, highly proficient French–German bilinguals: participants read aloud isolated French and German words and pseudo-words while the First Fixation Location (FFL), its duration and its latency were measured. Since transparent linguistic contexts and pseudo-words would favour a direct grapheme/phoneme conversion, the reading strategy should be more local for German than for French words (FFL closer to the word beginning), while no difference in FFL is expected between contexts for pseudo-words. Our results confirm these hypotheses, providing the first evidence that the same individuals engage different reading strategies depending on language opacity, and suggesting that a given brain process can be modulated by a given context.
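A minimal sketch of the kind of comparison such a design affords (synthetic numbers, not the study's data): per-participant mean FFL, expressed as a proportion of word length, compared between the German and French word conditions with a paired t-test.

    import numpy as np
    from scipy import stats

    # Toy sketch: paired comparison of mean First Fixation Location (FFL, as a
    # proportion of word length) between German and French words. Synthetic data.
    rng = np.random.default_rng(1)
    n = 20                                          # participants
    ffl_german = rng.normal(0.35, 0.05, n)          # hypothesised: closer to word onset
    ffl_french = rng.normal(0.45, 0.05, n)
    t, p = stats.ttest_rel(ffl_german, ffl_french)
    print(f"t({n - 1}) = {t:.2f}, p = {p:.4f}")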
Abstract:
Discourse connectives are often said to be language specific, and therefore not easily paired with a translation equivalent in a target language. However, few studies have assessed the magnitude and the causes of these divergences. In this paper, we provide an overview of the similarities and discrepancies between causal connectives in two typologically related languages: English and French. We first discuss two criteria used in the literature to account for these differences: the notion of domains of use and the information status of the cause segment. We then test the validity of these criteria through an empirical contrastive study of causal connectives in English and French, performed on a bidirectional corpus. Our results indicate that French and English connectives have only partially overlapping profiles and that translation equivalents are adequately predicted by these two criteria.
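As a schematic of what a bidirectional contrastive corpus study involves (a made-up two-sentence corpus and an arbitrary connective inventory, purely for illustration), one can tally which French connective renders each English causal connective in aligned sentence pairs:

    from collections import Counter

    # Illustrative sketch with a hypothetical mini-corpus of aligned sentence pairs.
    english_connectives = ["because", "since"]
    french_connectives = ["parce que", "puisque", "car"]

    aligned_pairs = [
        ("He left because the meeting was over.",
         "Il est parti parce que la réunion était terminée."),
        ("Since you ask, I will explain.",
         "Puisque tu le demandes, je vais expliquer."),
    ]

    counts = Counter()
    for en, fr in aligned_pairs:
        for ec in english_connectives:
            if ec in en.lower():
                for fc in french_connectives:
                    if fc in fr.lower():
                        counts[(ec, fc)] += 1
    print(counts)   # Counter({('because', 'parce que'): 1, ('since', 'puisque'): 1})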
Abstract:
The goal of the present thesis was to investigate the production of code-switched utterances in bilingual speech. This study investigates the availability of grammatical-category information during bilingual language processing. The specific aim is to examine the processes involved in the production of Persian-English bilingual compound verbs (BCVs). A bilingual compound verb is formed when the nominal constituent of a compound verb is replaced by an item from the other language; in the cases of BCVs examined here, the nominal constituent is replaced by a verb from the other language. The main question addressed is how a lexical element corresponding to a verb node can be placed in a slot that corresponds to a noun lemma. This study also investigates how the production of BCVs might be captured within a model and how such a model may be integrated within incremental network models of speech production. Both naturalistic and experimental data were used to investigate the processes involved in the production of BCVs. In the first part of the study, I collected 2298 minutes of a popular Iranian TV program and found 962 code-switched utterances. In 83 (8%) of the switched cases, insertions occurred within the Persian compound verb structure, resulting in BCVs. In the second part, a picture-word interference experiment was conducted. This experiment addressed whether, in the production of Persian-English BCVs, English verbs compete with the corresponding Persian compound verbs as a whole, or only with the nominal constituents of the Persian compound verbs. Persian-English bilinguals named pictures depicting actions in four conditions in Persian (L1). In condition 1, participants named the pictured action using the whole Persian compound verb in the context of its English equivalent distractor verb. In condition 2, only the nominal constituent was produced, in the presence of the light verb of the target Persian compound verb, in the context of a semantically closely related English distractor verb. In condition 3, the whole Persian compound verb was produced in the context of a semantically unrelated English distractor verb. In condition 4, only the nominal constituent was produced, in the presence of the light verb of the target Persian compound verb, in the context of a semantically unrelated English distractor verb. The main effect of linguistic unit was significant by participants and items: naming latencies were longer for the nominal linguistic unit than for the compound verb (CV) linguistic unit. That is, participants were slower to produce the nominal constituent of a compound verb in the context of a semantically closely related English distractor verb than to produce the whole compound verb in the same context. The three-way interaction between version of the experiment (CV and nominal versions), linguistic unit (nominal and CV), and relation (semantically related and unrelated distractor words) was significant by participants. In both versions, naming latencies were longer for the semantically related nominal linguistic unit than for the semantically related CV linguistic unit, and longer for the semantically related nominal linguistic unit than for the semantically unrelated nominal linguistic unit.
Both the analysis of the naturalistic data and the results of the experiment revealed that, in the production of the nominal constituent of BCVs, a verb from the other language may compete with a noun from the base language, suggesting that grammatical category does not necessarily constrain lexical access during the production of the nominal constituent of BCVs. Condition 2 (the nominal linguistic unit) provided a minimal context in which the nominal constituent was produced in the presence of its corresponding light verb; the results suggest that generating words within such a context does not guarantee that grammatical-class effects become available. A model is proposed to characterize the processes involved in the production of BCVs, and implications for models of bilingual language production are discussed.
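To make the design concrete, here is a toy tabulation (invented numbers, not the thesis data) of mean naming latencies per cell of the linguistic-unit by distractor-relation design described above:

    import pandas as pd

    # Toy sketch: mean naming latency (ms) per cell of the 2 x 2 design
    # (linguistic unit: CV vs. nominal; distractor relation: related vs. unrelated).
    # The latencies below are invented for illustration.
    trials = pd.DataFrame({
        "unit":       ["CV", "CV", "nominal", "nominal"] * 2,
        "relation":   ["related", "unrelated"] * 4,
        "latency_ms": [880, 860, 960, 900, 890, 855, 955, 905],
    })
    print(trials.groupby(["unit", "relation"])["latency_ms"].mean())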
Abstract:
This study focused on the relationship between students’ Advanced Placement (AP) English language performance and their subsequent college success. Targeted students were divided into three groups according to their AP English Language performance. Subsequent college success was measured by students’ first-year college GPA, retention to the second year, and institutional selectivity. The demographic characteristics of the three AP performance groups with regard to gender, ethnicity, and best language spoken are provided. Results indicated that, after controlling for students’ SAT scores as a measure of prior academic performance, AP English Language performance was positively related to all three measures of college success.
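A minimal sketch of the kind of analysis described (simulated data; variable names, group labels, and coefficients are placeholders, not the study's): first-year GPA regressed on AP performance group while controlling for SAT score.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Toy sketch with simulated data: does AP group predict GPA after controlling for SAT?
    rng = np.random.default_rng(3)
    n = 500
    df = pd.DataFrame({
        "sat": rng.normal(1100, 150, n),
        "ap_group": rng.choice(["low", "mid", "high"], n),
    })
    ap_effect = df["ap_group"].map({"low": 0.0, "mid": 0.2, "high": 0.4})
    df["gpa"] = 1.0 + 0.0015 * df["sat"] + ap_effect + rng.normal(0, 0.3, n)

    model = smf.ols("gpa ~ C(ap_group, Treatment('low')) + sat", data=df).fit()
    print(model.summary().tables[1])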
Abstract:
As a result of our teaching experience at the Facultad de Humanidades y Ciencias de la Educación of the UNLP, giving reading comprehension courses in a foreign language (FL), and with the ever-present goal of improving our teaching practices so as to achieve the expected outcome of an autonomous reader in French, we decided to undertake the present research. The starting hypothesis of our analysis is that reading comprehension in the FL could be verified through summaries written in the mother tongue (MT), and that there is a close relationship between the reading strategies used in the MT and those used in the FL. From this starting point we posed a series of questions organized around three axes, according to the components they target: cognitive, methodological-strategic, and discursive. We took as our basis a corpus of MT summaries of an FL text produced by a volunteer group of students of the course Capacitación en Idioma Francés I, building with them a joint observation and analysis apparatus composed of: a survey at the beginning of the course answered in the MT, a survey prior to reading the text to be summarized answered in the MT, and a post-summary survey answered in the MT. The work is structured in six parts: Introduction, Problem statement, Theoretical framework, Data collection methodology, Corpus analysis, and Conclusions and perspectives. We believe that the present work constitutes a reflection on, and a starting point for, the analysis of one of the problems posed by the didactics of FL reading comprehension at university: the reading strategies of FL students, particularly in French as a Foreign Language.
Abstract:
OntoTag - A Linguistic and Ontological Annotation Model Suitable for the Semantic Web
1. INTRODUCTION. LINGUISTIC TOOLS AND ANNOTATIONS: THEIR STRENGTHS AND LIMITATIONS
Computational Linguistics is already a consolidated research area. It builds upon the results of two other major areas, namely Linguistics and Computer Science and Engineering, and it aims at developing computational models of human language (or natural language, as it is termed in this area). Its best-known applications are probably the various tools developed so far for processing human language, such as machine translation systems, speech recognizers and dictation programs.
These tools for processing human language are commonly referred to as linguistic tools. Apart from the examples mentioned above, there are also other types of linguistic tools that are perhaps not so well known, but on which most other applications of Computational Linguistics are built. These other types of linguistic tools comprise POS taggers, natural language parsers and semantic taggers, amongst others. All of them can be termed linguistic annotation tools.
Linguistic annotation tools are important assets. In fact, POS and semantic taggers (and, to a lesser extent, also natural language parsers) have become critical resources for the computer applications that process natural language. Hence, any computer application that has to analyse a text automatically and ‘intelligently’ will include at least a module for POS tagging. The more an application needs to ‘understand’ the meaning of the text it processes, the more linguistic tools and/or modules it will incorporate and integrate.
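As a concrete (if minimal) illustration of such a POS-tagging module, the snippet below uses NLTK's default English tagger; NLTK is just one convenient choice here, not a tool evaluated or endorsed in this work.

    # Minimal illustration of a POS-tagging step using NLTK's default English tagger.
    import nltk

    # Resource names vary slightly across NLTK versions; downloading both variants is harmless.
    for resource in ("punkt", "punkt_tab",
                     "averaged_perceptron_tagger", "averaged_perceptron_tagger_eng"):
        nltk.download(resource, quiet=True)

    tokens = nltk.word_tokenize("The parser reads the tagged sentence.")
    print(nltk.pos_tag(tokens))
    # [('The', 'DT'), ('parser', 'NN'), ('reads', 'VBZ'), ('the', 'DT'), ...]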
However, linguistic annotation tools still have some limitations, which can be summarised as follows:
1. Normally, they perform annotations only at a certain linguistic level (that is, Morphology, Syntax, Semantics, etc.).
2. They usually introduce a certain rate of errors and ambiguities when tagging. This error rate ranges from 10 percent up to 50 percent of the units annotated for unrestricted, general texts.
3. Their annotations are most frequently formulated in terms of an annotation schema designed and implemented ad hoc.
A priori, it seems that the interoperation and the integration of several linguistic tools into an appropriate software architecture could most likely solve the limitations stated in (1). Besides, integrating several linguistic annotation tools and making them interoperate could also minimise the limitation stated in (2). Nevertheless, in the latter case, all these tools should produce annotations for a common level, which would have to be combined in order to correct their corresponding errors and inaccuracies. Yet, the limitation stated in (3) prevents both types of integration and interoperation from being easily achieved.
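A very small sketch of the kind of combination suggested above (the tag sequences are invented; a real setting would first have to align and normalise the tools' outputs): majority voting over the POS tags proposed by three taggers for the same tokens.

    from collections import Counter

    # Hypothetical sketch: majority voting over POS tags produced by three taggers
    # for the same token sequence (tags here are invented for illustration).
    def vote(tag_sequences):
        combined = []
        for tags_for_token in zip(*tag_sequences):
            tag, _ = Counter(tags_for_token).most_common(1)[0]
            combined.append(tag)
        return combined

    tagger_a = ["DT", "NN", "VBZ", "DT", "JJ", "NN"]
    tagger_b = ["DT", "NN", "NNS", "DT", "JJ", "NN"]   # disagrees on the third token
    tagger_c = ["DT", "NN", "VBZ", "DT", "JJ", "NN"]
    print(vote([tagger_a, tagger_b, tagger_c]))        # ['DT', 'NN', 'VBZ', 'DT', 'JJ', 'NN']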
In addition, most high-level annotation tools rely on other, lower-level annotation tools and their outputs to generate their own. For example, sense-tagging tools (operating at the semantic level) often use POS taggers (operating at a lower level, i.e., the morphosyntactic one) to identify the grammatical category of the word or lexical unit they are annotating. Accordingly, if a faulty or inaccurate low-level annotation tool is to be used by another, higher-level one in its process, the errors and inaccuracies of the former should be minimised in advance. Otherwise, these errors and inaccuracies would be transferred to (and even magnified in) the annotations of the high-level annotation tool.
Therefore, it would be quite useful to find a way to
(i) correct or, at least, reduce the errors and the inaccuracies of lower-level linguistic tools;
(ii) unify the annotation schemas of different linguistic annotation tools or, more generally speaking, make these tools (as well as their annotations) interoperate.
Clearly, solving (i) and (ii) should ease the automatic annotation of web pages by means of linguistic tools, and their transformation into Semantic Web pages (Berners-Lee, Hendler and Lassila, 2001). Yet, as stated above, (ii) is a type of interoperability problem, and ontologies (Gruber, 1993; Borst, 1997) have been successfully applied thus far to solve several interoperability problems. Hence, ontologies should also help solve the aforementioned problems and limitations of linguistic annotation tools.
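A schematic of how an ontology (or any shared vocabulary) can mediate between heterogeneous tagsets; the category names and mappings below are made up for the example and do not correspond to any particular ontology:

    # Illustrative sketch: tags from two tools with different tagsets are mapped onto
    # a shared set of categories. Category names and mappings are invented.
    SHARED_CATEGORIES = {"Noun", "Verb", "Determiner", "Adjective"}

    PENN_TO_SHARED   = {"NN": "Noun", "NNS": "Noun", "VB": "Verb", "VBZ": "Verb",
                        "DT": "Determiner", "JJ": "Adjective"}
    TOOL_B_TO_SHARED = {"noun": "Noun", "verb": "Verb", "det": "Determiner", "adj": "Adjective"}

    def to_shared(tag, mapping):
        category = mapping.get(tag)
        if category not in SHARED_CATEGORIES:
            raise ValueError(f"No shared category for tag {tag!r}")
        return category

    print(to_shared("VBZ", PENN_TO_SHARED))    # Verb
    print(to_shared("det", TOOL_B_TO_SHARED))  # Determiner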
Thus, to summarise, the main aim of the present work was to somehow combine these separate approaches, mechanisms and tools for annotation from Linguistics and Ontological Engineering (and the Semantic Web) into a sort of hybrid (linguistic and ontological) annotation model, suitable for both areas. This hybrid (semantic) annotation model should (a) benefit from the advances, models, techniques, mechanisms and tools of these two areas; (b) minimise (and even solve, when possible) some of the problems found in each of them; and (c) be suitable for the Semantic Web. The concrete goals that helped attain this aim are presented in the following section.
2. GOALS OF THE PRESENT WORK
As mentioned above, the main goal of this work was to specify a hybrid (that is, linguistically-motivated and ontology-based) model of annotation suitable for the Semantic Web (i.e. it had to produce a semantic annotation of web page contents). This entailed that the tags included in the annotations of the model had to (1) represent linguistic concepts (or linguistic categories, as they are termed in ISO/DCR (2008)), in order for this model to be linguistically-motivated; (2) be ontological terms (i.e., use an ontological vocabulary), in order for the model to be ontology-based; and (3) be structured (linked) as a collection of ontology-based