936 results for Lexical error


Relevance:

60.00%

Publisher:

Abstract:

This research aims to describe 1) the lexical errors made in written production by francophone Secondary 3 students and 2) French teachers' relationship to lexical error (their conception of lexical error, their practices for assessing vocabulary in written production, and their modes of feedback on lexical errors). The first part of the research consists of an error analysis at three levels: 1) a linguistic description of the errors using a typology, 2) an assessment of the severity of the errors and 3) an explanation of their possible sources. The corpus analysed comprises 300 texts written in French class by Secondary 3 students. The analysis revealed 1,144 lexical errors. The most frequent are semantic problems (30%), errors involving the morphosyntactic properties of lexical units (21%) and the use of colloquial terms (17%). This distribution shows that half of the lexical errors are attributable to a lack of knowledge of word properties other than meaning and form. The assessment of error severity rests on three criteria: the errors' linguistic acceptability according to dictionaries, their impact on comprehension and their degree of integration into usage. Register problems are generally judged the least serious, while semantic errors account for nearly all of the serious errors. The third line of analysis concerns the sources of the errors and brings out three main ones: the influence of spoken language, semantic proximity, and formal similarity between the word used and the word intended. The second part of the thesis deals with French teachers' relationship to lexical error and is based on the analysis of 224 corrected compositions and on a series of eight interviews with Secondary 3 teachers. When correcting, teachers mainly flag spelling errors and errors involving the morphosyntactic properties of words (gender, invariability, government), which they classify as grammar errors. More strictly lexical errors, that is, semantic errors, the use of colloquial terms and collocation errors, are rarely flagged, and teachers' annotations for these error types are vague and unsystematic, giving students few leads for correcting them. The assessment of vocabulary in written production is always a qualitative appreciation based on the teachers' overall impression rather than on precise criteria, the only clear indicator being repetition. Teachers' explanations of lexical errors rely largely on intuition, which points to gaps in their training with respect to vocabulary. Teachers admit to teaching very little vocabulary at the secondary level and attribute this to a lack of time and of adequate tools. Vocabulary teaching is always subordinated to writing or reading tasks and aims more at the acquisition of specific words than at the development of genuine lexical competence.

Relevance:

40.00%

Publisher:

Abstract:

We present three jargonaphasic patients who made phonological errors in naming, repetition and reading. We analyse target/response overlap using statistical models to answer three questions: 1) Is there a single phonological source for errors or two sources, one for target-related errors and a separate source for abstruse errors? 2) Can correct responses be predicted by the same distribution used to predict errors or do they show a completion boost (CB)? 3) Is non-lexical and lexical information summed during reading and repetition? The answers were clear. 1) Abstruse errors did not require a separate distribution created by failure to access word forms. Abstruse and target-related errors were the endpoints of a single overlap distribution. 2) Correct responses required a special factor, e.g., a CB or lexical/phonological feedback, to preserve their integrity. 3) Reading and repetition required separate lexical and non-lexical contributions that were combined at output.
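The abstract does not specify how target/response overlap was scored; a minimal sketch of one plausible overlap measure, the proportion of target phonemes preserved in the response (the function name, metric and example transcriptions are hypothetical, and the paper's statistical models are not reproduced):

```python
# Hypothetical sketch of a target/response phonological overlap measure:
# the proportion of target phonemes that also appear in the response.
# High values correspond to target-related errors, near-zero values to
# abstruse errors; this is an illustration, not the authors' metric.

def phoneme_overlap(target: list[str], response: list[str]) -> float:
    """Proportion of target phonemes present anywhere in the response."""
    if not target:
        return 0.0
    shared = sum(1 for p in target if p in response)
    return shared / len(target)

# Toy transcriptions: target "cat" /k ae t/, response "cap" /k ae p/
print(phoneme_overlap(["k", "ae", "t"], ["k", "ae", "p"]))  # ~0.67, target-related
print(phoneme_overlap(["k", "ae", "t"], ["s", "ih", "m"]))  # 0.0, abstruse
```

Under the paper's single-distribution conclusion, target-related and abstruse errors would simply occupy opposite ends of the distribution of such overlap scores.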

Relevance:

30.00%

Publisher:

Abstract:

Embodied theories of cognition propose that neural substrates used in experiencing the referent of a word, for example perceiving upward motion, should be engaged in weaker form when that word, for example ‘rise’, is comprehended. Motivated by the finding that the perception of irrelevant background motion at near-threshold, but not supra-threshold, levels interferes with task execution, we assessed whether interference from near-threshold background motion was modulated by its congruence with the meaning of words (semantic content) when participants completed a lexical decision task (deciding if a string of letters is a real word or not). Reaction times for motion words, such as ‘rise’ or ‘fall’, were slower when the direction of visual motion and the ‘motion’ of the word were incongruent, but only when the visual motion was at near-threshold levels. When motion was supra-threshold, the distribution of error rates, not reaction times, implicated low-level motion processing in the semantic processing of motion words. As the perception of near-threshold signals is not likely to be influenced by strategies, our results support a close contact between semantic information and perceptual systems.

Relevance:

30.00%

Publisher:

Abstract:

This research tests the hypothesis that knowledge of derivational morphology facilitates vocabulary acquisition in beginning adult second language learners. Participants were monolingual English-speaking college students aged 18 years and older enrolled in introductory Spanish courses. Knowledge of Spanish derivational morphology was tested with a forced-choice translation task. Spanish lexical knowledge was measured by a translation task using direct-translation (English word) primes and conceptual (picture) primes. A 2 x 2 x 2 mixed-factor ANOVA examined the relationships between morphological knowledge (strong, moderate), error type (form-based, conceptual), and prime type (direct translation, picture). The results are consistent with a relationship between knowledge of derivational morphology and acquisition of second language vocabulary. Participants made more conceptually based errors than form-based errors, F(1, 22) = 7.744, p = .011. This result is consistent with Clahsen and Felser's (2006) and Ullman's (2004) models of second language processing. Additionally, participants with Strong morphological knowledge made fewer errors on the lexical knowledge task than participants with Moderate morphological knowledge, t(23) = -2.656, p = .014. I suggest future directions to clarify the relationship between morphological knowledge and lexical development in adult second language learners.
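The full 2 x 2 x 2 mixed-factor ANOVA is not reconstructed here; a minimal sketch of the two reported contrasts as simple t-tests, assuming per-participant error counts are available (function and argument names are hypothetical placeholders, not the study's data):

```python
# Minimal sketch of the two reported contrasts; the paper's full
# 2 x 2 x 2 mixed-factor ANOVA is not reproduced here. The arrays are
# hypothetical per-participant error counts from the lexical knowledge task.
import numpy as np
from scipy import stats

def error_type_contrast(conceptual_errors: np.ndarray, form_based_errors: np.ndarray):
    """Within-subject comparison of conceptual vs form-based errors (paired t-test)."""
    return stats.ttest_rel(conceptual_errors, form_based_errors)

def knowledge_group_contrast(strong_group: np.ndarray, moderate_group: np.ndarray):
    """Between-group comparison of total errors for Strong vs Moderate
    morphological-knowledge participants (independent-samples t-test)."""
    return stats.ttest_ind(strong_group, moderate_group)
```

Each function returns a (statistic, p-value) pair analogous to the F and t values reported above.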

Relevance:

30.00%

Publisher:

Abstract:

"All the tablets here published form part of the Nippur collections now in the University Museum of the University of Pennsylvania."--Pref.

Relevance:

30.00%

Publisher:

Abstract:

Single word production requires that phoneme activation be maintained while articulatory conversion takes place. Word serial recall, connected speech and non-word production (repetition and spelling) are all assumed to involve a phonological output buffer. A crucial question is whether the same memory resources are also involved in single word production. We investigate this question by assessing length and positional effects in the single word repetition and reading of six aphasic patients. We expect a damaged buffer to result in per-phoneme error rates that increase with word length, as well as in position effects. Although our patients had trouble with phoneme activation (they made mainly errors of phoneme selection), they did not show the effects expected from a buffer impairment. These results show that phoneme activation cannot be automatically equated with a buffer. We hypothesize that the phonemes of existing words are kept active through permanent links to the word node. Thus, the sustained activation needed for their articulation comes from the lexicon and has different characteristics from the activation needed for the short-term retention of an unbound set of units. We conclude that there is no need and no evidence for a phonological buffer in single word production.
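One way to test the predicted length effect, sketched below under the assumption that phoneme-level accuracy data are available; the logistic-regression framing and column names are assumptions, not the authors' analysis:

```python
# Sketch: does per-phoneme error probability rise with word length?
# Logistic regression of phoneme correctness on target word length.
# The framing and column names are illustrative assumptions only.
import pandas as pd
import statsmodels.formula.api as smf

def length_effect(df: pd.DataFrame):
    """df has one row per produced phoneme with columns:
    'correct' (0/1) and 'word_length' (number of phonemes in the target)."""
    model = smf.logit("correct ~ word_length", data=df).fit()
    return model.params["word_length"], model.pvalues["word_length"]
```

A damaged buffer would predict a reliably negative word_length coefficient (accuracy falling as words get longer), which these patients did not show.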

Relevance:

20.00%

Publisher:

Abstract:

This article presents a characterization of the lexical competence (vocabulary knowledge and use) of students learning to read in EFL at a public university in São Paulo state. Although vocabulary has consistently been cited as one of the EFL reader's main sources of difficulty, there are no data in the literature showing the extent of these difficulties. The data for this study are part of a previous research project, which investigated, from the perspective of an interactive model of reading, the relationship between lexical competence and EFL reading comprehension. Both quantitative and qualitative data were considered. The quantitative data are the product of vocabulary tests taken by 49 subjects, while the qualitative data comprise pause protocols from three subjects, with reading ability ranging from good to poor, selected on the basis of their performance in the quantitative study. A rich concept of vocabulary knowledge was adapted and used for the development of the vocabulary tests and the analysis of the protocols. The results of both studies show, with a few exceptions, that the group's lexical competence is vague and imprecise along two dimensions: quantitative (the number of known words, or vocabulary size) and qualitative (the depth of this knowledge). Implications for the teaching of reading in a foreign language context are discussed.

Relevance:

20.00%

Publisher:

Abstract:

Background: Genome-wide association studies (GWAS) are becoming the approach of choice for identifying genetic determinants of complex phenotypes and common diseases. The astonishing amount of generated data and the use of distinct genotyping platforms with variable genomic coverage still pose analytical challenges. Imputation algorithms combine directly genotyped markers with the haplotypic structure of the population of interest to infer poorly genotyped or missing markers, and they are considered a near zero cost approach for comparing and combining data generated in different studies. Several reports have stated that imputed markers have overall acceptable accuracy, but no published report has performed a pairwise comparison of imputed and empirical association statistics for a complete set of GWAS markers. Results: In this report we identified a total of 73 imputed markers that yielded a nominally statistically significant association at P < 10^-5 for type 2 diabetes mellitus and compared them with results obtained from empirical allele frequencies. Interestingly, despite their overall high correlation, association statistics based on imputed frequencies were discordant for 35 of the 73 (47%) associated markers, considerably inflating the type I error rate of imputed markers. We comprehensively tested several quality thresholds, the haplotypic structure underlying imputed markers and the use of flanking markers as predictors of inaccurate association statistics derived from imputed markers. Conclusions: Our results suggest that association statistics from imputed markers in specific MAF (minor allele frequency) ranges, located in weak linkage disequilibrium blocks or strongly deviating from local patterns of association, are prone to inflated false positive association signals. The present study highlights the potential of imputation procedures and proposes simple procedures for selecting the best imputed markers for follow-up genotyping studies.
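A minimal sketch of the kind of pairwise comparison described, assuming a simple allelic chi-square test on allele counts; the function names and the significance threshold used here are illustrative, not the study's actual pipeline:

```python
# Sketch of comparing association statistics from empirical vs imputed
# genotypes for one marker using a 2x2 allelic chi-square test.
# Function names and threshold are illustrative, not the study's pipeline.
from scipy.stats import chi2_contingency

def allelic_test(case_minor, case_major, control_minor, control_major):
    """Allelic association p-value from minor/major allele counts in cases and controls."""
    _, p, _, _ = chi2_contingency([[case_minor, case_major],
                                   [control_minor, control_major]])
    return p

def discordant(p_empirical, p_imputed, alpha=1e-5):
    """Flag a marker whose significance call differs between empirical and imputed data."""
    return (p_empirical < alpha) != (p_imputed < alpha)
```

Running both tests per marker and counting discordant calls is the spirit of the 35-of-73 comparison reported above.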

Relevance:

20.00%

Publisher:

Abstract:

In this study, the innovation approach is used to estimate the total measurement error associated with power system state estimation. This is required because the power system equations are highly correlated with one another and, as a consequence, part of the measurement error is masked. For that purpose, an innovation index (II), which quantifies the amount of new information a measurement contains, is proposed. A critical measurement is the limiting case of a measurement with low II: it has a zero II and its error is totally masked. In other words, that measurement brings no innovation to the gross error test. Using the II of a measurement, the gross error masked by the state estimation is recovered and the total gross error of that measurement is then composed. Instead of the classical normalised measurement residual amplitude, the corresponding normalised composed measurement residual amplitude is used in the gross error detection and identification test, but with m degrees of freedom. The gross error processing turns out to be very simple to implement, requiring only a few adaptations to existing state estimation software. The IEEE 14-bus system is used to validate the proposed gross error detection and identification test.
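For reference, the classical normalised-residual test that the composed residual extends can be written in standard WLS state-estimation notation (z the measurement vector, h(x) the measurement model, H its Jacobian at the estimate, R the measurement error covariance); the paper's innovation index and composed residual themselves are not reproduced here:

\[
r = z - h(\hat{x}), \qquad
\Omega = R - H\left(H^{\top} R^{-1} H\right)^{-1} H^{\top}, \qquad
r^{N}_{i} = \frac{|r_{i}|}{\sqrt{\Omega_{ii}}},
\]

with a gross error suspected whenever \( r^{N}_{i} \) exceeds a detection threshold (commonly 3). A critical measurement is one for which \( \Omega_{ii} = 0 \), so its residual carries no information, which is the situation the innovation index is designed to expose.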

Relevance:

20.00%

Publisher:

Abstract:

With the relentless quest for improved performance driving ever tighter manufacturing tolerances, machine tools are sometimes unable to meet the desired requirements. One option for improving the tolerances of machine tools is to compensate for their errors. Among all possible sources of machine tool error, thermally induced errors are, for newer machines in general, the most important. The present work demonstrates the evaluation and modelling of the thermal error behaviour of a CNC cylindrical grinding machine during its warm-up period.
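The abstract does not give the model form; a minimal sketch of a generic empirical warm-up model, fitting measured drift against temperature-sensor readings by least squares (a common compensation approach, not necessarily the authors'; all names are illustrative):

```python
# Generic sketch: fit measured thermal drift against temperature-sensor
# readings collected during warm-up, then predict drift for compensation.
# This is a common empirical approach, not necessarily the paper's model.
import numpy as np

def fit_thermal_error_model(temperatures: np.ndarray, drift_um: np.ndarray) -> np.ndarray:
    """Least-squares fit of drift (micrometres) on temperature rises (deg C).

    temperatures: shape (n_samples, n_sensors), temperature rise above ambient
    drift_um:     shape (n_samples,), measured positional drift
    Returns one weight per sensor plus an intercept.
    """
    X = np.hstack([temperatures, np.ones((temperatures.shape[0], 1))])
    coeffs, *_ = np.linalg.lstsq(X, drift_um, rcond=None)
    return coeffs

def predict_drift(coeffs: np.ndarray, temperatures: np.ndarray) -> np.ndarray:
    """Predicted drift, to be fed to the controller as a compensation offset."""
    X = np.hstack([temperatures, np.ones((temperatures.shape[0], 1))])
    return X @ coeffs
```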

Relevance:

20.00%

Publisher:

Abstract:

We describe a one-time signature scheme based on the hardness of the syndrome decoding problem, and prove it secure in the random oracle model. Our proposal can be instantiated on general linear error correcting codes, rather than restricted families like alternant codes for which a decoding trapdoor is known to exist.

Relevance:

20.00%

Publisher:

Abstract:

The purpose of this article is to present a quantitative analysis of the contribution of human failure to collisions and/or groundings of oil tankers, considering the recommendations of the "Guidelines for Formal Safety Assessment" of the International Maritime Organization. The methodology is presented first, emphasizing the use of the technique for human error prediction to reach the desired objective. This methodology is then applied to a ship operating on the Brazilian coast, and the procedure for isolating the human actions with the greatest potential to reduce the risk of an accident is described. Finally, the management and organizational factors presented in the "International Safety Management Code" are associated with these selected actions. An operator will therefore be able to decide where to act in order to obtain an effective reduction in the probability of accidents. Even though this study does not present a new methodology, it can serve as a reference for human reliability analysis in the maritime industry, which, despite having some guides for risk analysis, has few studies in which human reliability is effectively applied to the sector.

Relevance:

20.00%

Publisher:

Abstract:

Nine individuals with complex language deficits following left-hemisphere cortical lesions and a matched control group (n = 9) performed speeded lexical decisions on the third word of auditory word triplets containing a lexical ambiguity. The critical conditions were concordant (e.g., coin–bank–money), discordant (e.g., river–bank–money), neutral (e.g., day–bank–money), and unrelated (e.g., river–day–money). Triplets were presented with interstimulus intervals (ISIs) of 100 and 1250 ms. Overall, the left-hemisphere-damaged subjects appeared able to rapidly and exhaustively access meanings for lexical ambiguities but, unlike control subjects, were unable to reduce the level of activation of contextually inappropriate meanings at either the short or the long ISI. These findings are consistent with a disruption of the proposed role of the left hemisphere in selecting and suppressing meanings via contextual integration, and with a sparing of the right-hemisphere mechanisms responsible for maintaining alternative meanings.

Relevance:

20.00%

Publisher:

Abstract:

The Coefficient of Variance (the standard deviation of response time divided by the mean response time, RT) is a measure of response time variability that corrects for differences in mean RT (Segalowitz & Segalowitz, 1993). A positive correlation between mean RTs and CVs as both decrease (rCV-RT) has been proposed as an indicator of L2 automaticity and, more generally, as an index of processing efficiency. The current study evaluates this claim by examining lexical decision performance by individuals from three levels of English proficiency (Intermediate ESL, Advanced ESL and L1 controls) on stimuli from four levels of item familiarity, as defined by frequency of occurrence. A three-phase model of skill development defined by changing rCV-RT values was tested. Results showed that RTs and CVs systematically decreased as a function of increasing proficiency and frequency levels, with the rCV-RT serving as a stable indicator of individual differences in lexical decision performance. The rCV-RT and automaticity/restructuring account is discussed in light of the findings. The CV is also evaluated as a more general quantitative index of processing efficiency in the L2.
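A minimal sketch of the CV and rCV-RT computations described above, assuming per-trial RTs grouped by participant (the column names are hypothetical):

```python
# Sketch of the CV (SD of RT / mean RT) per participant and of the rCV-RT
# correlation across participants; column names are hypothetical.
import pandas as pd
from scipy.stats import pearsonr

def cv_by_participant(trials: pd.DataFrame) -> pd.DataFrame:
    """trials has columns 'participant' and 'rt' (one row per correct trial)."""
    stats_df = trials.groupby("participant")["rt"].agg(["mean", "std"])
    stats_df["cv"] = stats_df["std"] / stats_df["mean"]
    return stats_df

def rcv_rt(stats_df: pd.DataFrame) -> float:
    """Correlation between participants' mean RTs and their CVs (rCV-RT)."""
    r, _ = pearsonr(stats_df["mean"], stats_df["cv"])
    return r
```

On the account above, a positive rCV-RT accompanying falling mean RTs signals genuine automatization rather than mere speed-up.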