31 results for Interpersonality in lexical

in Aston University Research Archive


Relevance:

90.00%

Publisher:

Abstract:

Derivational morphology proposes meaningful connections between words and is largely unrepresented in lexical databases. This thesis presents a project to enrich a lexical database with morphological links and to evaluate their contribution to disambiguation. A lexical database with sense distinctions was required. WordNet was chosen because of its free availability and widespread use. Its suitability was assessed through critical evaluation with respect to specifications and criticisms, using a transparent, extensible model. The identification of serious shortcomings suggested a portable enrichment methodology, applicable to alternative resources. Although 40% of the most frequent words are prepositions, they have been largely ignored by computational linguists, so the addition of prepositions was also required. The preferred approach to morphological enrichment was to infer relations from phenomena discovered algorithmically. Both existing databases and existing algorithms can capture regular morphological relations, but cannot capture exceptions correctly; neither of them provides any semantic information. Some morphological analysis algorithms are subject to the fallacy that morphological analysis can be performed simply by segmentation. Morphological rules, grounded in observation and etymology, govern associations between and attachment of suffixes and contribute to defining the meaning of morphological relationships. Specifying character substitutions circumvents the segmentation fallacy. Morphological rules are prone to undergeneration, minimised through a variable lexical validity requirement, and overgeneration, minimised by rule reformulation and by restricting monosyllabic output. Rules take into account the morphology of ancestor languages through co-occurrences of morphological patterns. Multiple rules applicable to an input suffix need their precedence established. The resistance of prefixations to segmentation has been addressed by identifying linking vowel exceptions and irregular prefixes. The automatic affix discovery algorithm applies heuristics to identify meaningful affixes and is combined with morphological rules into a hybrid model, fed only with empirical data collected without supervision. Further algorithms apply the rules optimally to automatically pre-identified suffixes and break words into their component morphemes. To handle exceptions, stoplists were created in response to initial errors and fed back into the model through iterative development, leading to 100% precision, contestable only on lexicographic criteria. Stoplist length is minimised by special treatment of monosyllables and reformulation of rules. 96% of words and phrases are analysed. 218,802 directed derivational links have been encoded in the lexicon rather than the wordnet component of the model because the lexicon provides the optimal clustering of word senses. Both links and analyser are portable to an alternative lexicon. The evaluation uses the extended gloss overlaps disambiguation algorithm. The enriched model outperformed WordNet in terms of recall without loss of precision. The failure of all experiments to outperform disambiguation by frequency reflects on WordNet's sense distinctions.
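
As a rough illustration of the rule formalism described above, the following minimal sketch shows how a character-substitution rule with a lexical validity requirement might be applied; the rule set, lexicon, and function names are invented for illustration and are not taken from the thesis.

    # Hypothetical suffix-substitution rules; substitutions rather than plain
    # segmentation avoid the segmentation fallacy ("decision" -> "decide").
    LEXICON = {"happy", "happiness", "decide", "decision", "dark", "darkness"}

    RULES = [
        ("iness", "y"),   # happiness -> happy; listed before "ness" to fix precedence
        ("ness", ""),     # darkness -> dark
        ("sion", "de"),   # decision -> decide
    ]

    def derive_base(word, lexicon=LEXICON):
        """Return (base, rule) for the first applicable rule whose output is lexically valid."""
        for src, dst in RULES:
            if word.endswith(src):
                candidate = word[:-len(src)] + dst
                # Lexical validity requirement: the proposed base must itself be
                # attested in the lexicon, which limits overgeneration by the rules.
                if candidate in lexicon and len(candidate) > 1:
                    return candidate, (src, dst)
        return None, None

    print(derive_base("happiness"))  # ('happy', ('iness', 'y'))
    print(derive_base("decision"))   # ('decide', ('sion', 'de'))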

Relevance:

90.00%

Publisher:

Abstract:

We describe the case of a dysgraphic aphasic individual, S.G.W., who, in writing to dictation, produced high rates of formally related errors consisting of both lexical substitutions and what we call morphological-compound errors involving legal or illegal combinations of morphemes. These errors were produced in the context of a minimal number of semantic errors. We could exclude problems with phonological discrimination and phonological short-term memory. We also excluded rapid decay of lexical information and/or weak activation of word forms and letter representations since S.G.W.'s spelling showed no effect of delay and no consistent length effects, but, instead, paradoxical complexity effects with segmental, lexical, and morphological errors that were more complex than the target. The case of S.G.W. strongly resembles that of another dysgraphic individual reported in the literature, D.W., suggesting that this pattern of errors can be replicated across patients. In particular, both patients show unusual errors resulting in the production of neologistic compounds (e.g., "bed button" in response to "bed"). These patterns can be explained if we accept two claims: (a) Brain damage can produce both a reduction and an increase in lexical activation; and (b) there are direct connections between phonological and orthographic lexical representations (a third spelling route). We suggest that both patients are suffering from a difficulty of lexical selection resulting from excessive activation of formally related lexical representations. This hypothesis is strongly supported by S.G.W.'s worse performance in spelling to dictation than in written naming, which shows that a phonological input, activating a cohort of formally related lexical representations, increases selection difficulties. © 2014 Taylor & Francis.

Relevance:

40.00%

Publisher:

Abstract:

In a group of adult dyslexics, word reading and, especially, word spelling are predicted more by what we have called lexical learning (tapped by a paired-associate task with pictures and written nonwords) than by phonological skills. Nonword reading and spelling, instead, are not associated with this task but are predicted by phonological tasks. Consistently, surface and phonological dyslexics show opposite profiles on lexical learning and phonological tasks. The phonological dyslexics are more impaired on the phonological tasks, while the surface dyslexics are equally or more impaired on the lexical learning tasks. Finally, orthographic lexical learning explains more variation in spelling than in reading, and subtyping based on spelling returns more interpretable results than subtyping based on reading. These results suggest that the quality of lexical representations is crucial to adult literacy skills. It is best measured by spelling and best predicted by a task of lexical learning. We hypothesize that lexical learning taps a uniquely human capacity to form new representations by recombining the units of a restricted set.

Relevance:

40.00%

Publisher:

Abstract:

We investigated the ability to learn new words in a group of 22 adults with developmental dyslexia/dysgraphia and the relationship between their learning and spelling problems. We identified a deficit that affected the ability to learn both spoken and written new words (lexical learning deficit). There were no comparable problems in learning other kinds of representations (lexical/semantic and visual) and the deficit could not be explained in terms of more traditional phonological deficits associated with dyslexia (phonological awareness, phonological STM). Written new word learning accounted for further variance in the severity of the dysgraphia after phonological abilities had been partialled out. We suggest that lexical learning may be an independent ability needed to create lexical/formal representations from a series of independent units. Theoretical and clinical implications are discussed. © 2005 Psychology Press Ltd.
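
A minimal sketch of the kind of hierarchical regression implied by the last result above (written new-word learning accounting for further variance in dysgraphia severity after phonological abilities are partialled out); the variable names and simulated data are assumptions for illustration only.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 22  # group size reported above
    phon_awareness = rng.normal(size=n)
    phon_stm = rng.normal(size=n)
    written_word_learning = rng.normal(size=n)
    dysgraphia_severity = (0.3 * phon_awareness + 0.6 * written_word_learning
                           + rng.normal(scale=0.5, size=n))

    # Step 1: phonological predictors only.
    X1 = sm.add_constant(np.column_stack([phon_awareness, phon_stm]))
    step1 = sm.OLS(dysgraphia_severity, X1).fit()

    # Step 2: add written new-word learning.
    X2 = sm.add_constant(np.column_stack([phon_awareness, phon_stm, written_word_learning]))
    step2 = sm.OLS(dysgraphia_severity, X2).fit()

    # The increase in R-squared is the additional variance accounted for.
    print(step1.rsquared, step2.rsquared, step2.rsquared - step1.rsquared)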

Relevance:

40.00%

Publisher:

Abstract:

A description, based on English political texts, of the difference between lexical meaning and meaning in context.

Relevance:

40.00%

Publisher:

Abstract:

This paper presents a statistical comparison of regional phonetic and lexical variation in American English. Both the phonetic and lexical datasets were first subjected to separate multivariate spatial analyses in order to identify the most common dimensions of spatial clustering in these two datasets. The dimensions of phonetic and lexical variation extracted by these two analyses were then correlated with each other, after being interpolated over a shared set of reference locations, in order to measure the similarity of regional phonetic and lexical variation in American English. This analysis shows that regional phonetic and lexical variation are remarkably similar in Modern American English.
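
A sketch of the pipeline described above, under simplifying assumptions: random placeholder data, PCA standing in for the multivariate spatial analysis, and nearest-neighbour interpolation over a shared set of reference points; none of these choices reproduce the paper's exact methods.

    import numpy as np
    from sklearn.decomposition import PCA
    from scipy.interpolate import griddata

    rng = np.random.default_rng(0)
    phon_coords = rng.random((200, 2))   # lon/lat of phonetic survey locations (placeholder)
    phon_data = rng.random((200, 50))    # phonetic variables per location
    lex_coords = rng.random((300, 2))    # lon/lat of lexical survey locations
    lex_data = rng.random((300, 80))     # lexical variables per location

    # Step 1: extract the main dimensions of spatial clustering in each dataset.
    phon_dims = PCA(n_components=3).fit_transform(phon_data)
    lex_dims = PCA(n_components=3).fit_transform(lex_data)

    # Step 2: interpolate both sets of dimensions over a shared set of reference locations.
    grid = rng.random((500, 2))
    phon_on_grid = np.column_stack(
        [griddata(phon_coords, phon_dims[:, k], grid, method="nearest") for k in range(3)])
    lex_on_grid = np.column_stack(
        [griddata(lex_coords, lex_dims[:, k], grid, method="nearest") for k in range(3)])

    # Step 3: correlate the interpolated phonetic and lexical dimensions.
    for i in range(3):
        for j in range(3):
            r = np.corrcoef(phon_on_grid[:, i], lex_on_grid[:, j])[0, 1]
            print(f"phonetic dim {i} vs lexical dim {j}: r = {r:.2f}")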

Relevance:

40.00%

Publisher:

Abstract:

We present three jargonaphasic patients who made phonological errors in naming, repetition and reading. We analyse target/response overlap using statistical models to answer three questions: 1) Is there a single phonological source for errors or two sources, one for target-related errors and a separate source for abstruse errors? 2) Can correct responses be predicted by the same distribution used to predict errors or do they show a completion boost (CB)? 3) Is non-lexical and lexical information summed during reading and repetition? The answers were clear. 1) Abstruse errors did not require a separate distribution created by failure to access word forms. Abstruse and target-related errors were the endpoints of a single overlap distribution. 2) Correct responses required a special factor, e.g., a CB or lexical/phonological feedback, to preserve their integrity. 3) Reading and repetition required separate lexical and non-lexical contributions that were combined at output.
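
As a toy illustration of what a target/response overlap measure can look like (the measure and the examples are invented; the paper's statistical modelling of the overlap distribution is considerably richer):

    from difflib import SequenceMatcher

    def overlap(target, response):
        """Proportion of shared material between target and response segment strings."""
        return SequenceMatcher(None, target, response).ratio()

    # Target-related and abstruse errors as endpoints of one overlap continuum.
    print(overlap("kaet", "kaep"))   # high overlap: target-related error
    print(overlap("kaet", "spun"))   # low overlap: abstruse error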

Relevance:

40.00%

Publisher:

Abstract:

This paper introduces a quantitative method for identifying newly emerging word forms in large time-stamped corpora of natural language and then describes an analysis of lexical emergence in American social media using this method, based on a multi-billion-word corpus of Tweets collected between October 2013 and November 2014. In total, 29 emerging word forms, which represent various semantic classes, grammatical parts of speech, and word formation processes, were identified through this analysis. These 29 forms are then examined from various perspectives in order to begin to better understand the process of lexical emergence.
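
A simplified sketch of how a frequency-based emergence criterion might be implemented over monthly counts; the toy forms, thresholds, and trend statistic are assumptions for illustration and are not the paper's actual selection procedure.

    import numpy as np

    # Hypothetical input: per-month token totals and per-month counts of candidate forms.
    monthly_totals = np.full(14, 1.0e9)  # tokens per month, Oct 2013 - Nov 2014
    candidate_counts = {
        "candidate_a": np.array([10, 15, 30, 55, 90, 160, 240, 380, 520,
                                 700, 950, 1200, 1500, 1900], dtype=float),
        "candidate_b": np.array([9.0e5, 9.1e5, 8.9e5, 9.0e5, 9.2e5, 8.8e5, 9.0e5,
                                 9.1e5, 8.9e5, 9.0e5, 9.1e5, 8.9e5, 9.0e5, 9.0e5]),
    }

    months = np.arange(14)
    for form, counts in candidate_counts.items():
        rel_freq = counts / monthly_totals
        trend = np.corrcoef(months, rel_freq)[0, 1]   # simple upward-trend statistic
        growth = rel_freq[-1] / max(rel_freq[0], 1e-12)
        if trend > 0.9 and growth > 5:
            print(f"{form}: flagged as emerging (trend r = {trend:.2f}, growth x{growth:.0f})")
        else:
            print(f"{form}: not flagged")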

Relevance:

30.00%

Publisher:

Abstract:

The paper illustrates the role of world knowledge in comprehending and translating texts. A short news item, which displays world knowledge fairly implicitly in condensed lexical forms, was translated by students from English into German. It is shown that their translation strategies changed from a first draft, which was rather close to the surface structure of the source text, to a final version which took situational aspects, text-typological conventions, and the different background knowledge of the respective addressees into account. Decisions on how much world knowledge has to be made explicit in the target text, however, must be based on the relevance principle. Consequences for teaching and for the notions of semantic knowledge and world knowledge are discussed.

Relevance:

30.00%

Publisher:

Abstract:

Our PhD study focuses on the role of aspectual marking in expressing simultaneity of events in Tunisian Arabic as a first language, French as a first language, as well as in French as a second language by Tunisian learners at different acquisitional stages. We examine how the explicit markers of on-goingness, qa:’id and «en train de» in Tunisian Arabic and in French respectively, are used to express this temporal relation, in competition with the simple forms, the prefixed verb form in Tunisian Arabic and the présent de l’indicatif in French. We use a complex verbal task of retelling simultaneous events sharing an interval on the time axis, based on eight videos presenting two situations happening in parallel. Two types of simultaneity are exploited: perfect simultaneity (when the two situations are parallel to each other) and inclusion (one situation is framed by the second one). Our informants in French and in Tunisian Arabic have two profiles: highly educated and low-educated speakers. We show that the participants’ response to the retelling task varies according to their profiles, and so does their use of the on-goingness devices in the expression of simultaneity. The differences observed between the two profile groups are explained by the degree to which the speakers have developed a habit of responding to tasks, a skill typically acquired during schooling. We notice overall that the use of qa:’id as well as of «en train de» is less frequent in the data than the use of the simple forms. However, qa:’id as well as «en train de» are employed to play discursive roles that go beyond the proposition level. We postulate that despite the shared features between Tunisian Arabic and French regarding the marking of on-goingness, namely the presence of explicit lexical, not fully grammaticalised, markers competing with other non-marked forms, the way they are used in the discourse of simultaneous events shows clear differences. We explain that «en train de» plays a more contrastive role than qa:’id and that its use in discourse obeys a stricter rule. In cases of the inclusion type of simultaneity, it is used to construe the ‘framing’ event that encloses the second event. In construing perfectly simultaneous events, and when both «en train de» and the présent de l’indicatif are used, the proposition with «en train de» generally precedes the proposition with the présent de l’indicatif, and not the other way around. qa:’id obeys a similar but less strict rule, as it can be used interchangeably with the simple form regardless of the order of propositions. The contrastive analysis of French L1 and L2 reveals learners’ deviations from native speakers’ use of on-goingness devices. They generalise the use of «en train de» and apply different rules to the interaction of the different marked and unmarked forms in discourse. Learners do not master its role in discourse even at advanced stages of acquisition, despite its possible emergence around the basic and intermediate varieties. We conclude that the native speakers’ use of «en train de» involves mastering its role at the macro-structure level. This feature, not explicitly available to learners in the input, might persistently present a challenge to L2 acquisition of the periphrasis.

Relevance:

30.00%

Publisher:

Abstract:

This action research (AR) study explores an alternative approach to vocabulary instruction for low-proficiency university students: a change from targeting individual words from the General Service List (West, 1953) to targeting frequent verb + noun collocations. A review of the literature indicated that a focus on collocations instead of individual words could potentially address the students’ productive challenges with targeted vocabulary. Over the course of four reflective cycles, this thesis addresses three main aspects of collocation instruction. First, it examines whether the students believe studying collocations is more useful than studying individual lexical items. Second, the thesis investigates whether a focus on collocations will lead to improvements in spoken fluency. This is tested through a comparison of a pre-intervention spoken assessment task with the findings from the same task completed 15 weeks later, after the intervention. Third, the thesis explores different procedures for the instruction of collocations under the classroom constraints of a university teaching context. In the first of the four reflective cycles, data is collected which indicates that the students believe a focus on collocations is superior to only teaching individual lexical items, that in the students’ opinion their productive abilities with the targeted structures have improved, and that delexicalized verb collocations are problematic for low-proficiency students. Reflective cycle two produces evidence indicating that productive tasks are superior to receptive tasks for fluency development. In reflective cycle three, productively challenging classroom tasks are investigated further, and the findings indicate that tasks with higher productive demands result in greater improvements in spoken fluency. The fourth reflective cycle uses a different type of collocation list: frequent adjective + noun collocations. Despite this change, the findings remain consistent in that certain types of collocations are problematic for low-proficiency language learners and that the evidence shows productive tasks are necessary to improve the students’ spoken ability.

Relevance:

30.00%

Publisher:

Abstract:

This study investigates concreteness effects in tasks requiring short-term retention. Concreteness effects were assessed in serial recall, matching span, order reconstruction, and free recall. Each task was carried out both in a control condition and under articulatory suppression. Our results show no dissociation between tasks that do and do not require spoken output. This argues against the redintegration hypothesis, according to which lexical-semantic effects in short-term memory arise only at the point of production. In contrast, concreteness effects were modulated by task demands that stressed retention of item versus order information. Concreteness effects were stronger in free recall than in serial recall. Suppression, which weakens phonological representations, enhanced the concreteness effect with item scoring. In a matching task, positive effects of concreteness occurred with open sets but not with closed sets of words. Finally, concreteness effects reversed when the task asked only for recall of word positions (as in the matching task), when phonological representations were weak (because of suppression), and when lexical-semantic representations were overactivated (because of closed sets). We interpret these results as consistent with a model where phonological representations are crucial for the retention of order, while lexical-semantic representations support maintenance of item identity in both input and output buffers.

Relevance:

30.00%

Publisher:

Abstract:

We report the case of a neologistic jargonaphasic and ask whether her target-related and abstruse neologisms are the result of a single deficit, which affects some items more severely than others, or two deficits: one to lexical access and the other to phonological encoding. We analyse both correct/incorrect performance and errors and apply both traditional and formal methods (maximum-likelihood estimation and model selection). All evidence points to a single deficit at the level of phonological encoding. Further characteristics are used to constrain the locus still further. V.S. does not show the type of length effect expected of a memory component, nor the pattern of errors associated with an articulatory deficit. We conclude that her neologistic errors can result from a single deficit at a level of phonological encoding that immediately follows lexical access where segments are represented in terms of their features. We do not conclude, however, that this is the only possible locus that will produce phonological errors in aphasia, or, indeed, jargonaphasia.
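
A toy sketch of the kind of formal model comparison mentioned above: fit a one-source model and a two-source (mixture) model to a set of overlap or error scores and compare them by AIC. The data, the Gaussian assumption, and the fitting code are illustrative assumptions only and do not reproduce the paper's analyses.

    import numpy as np
    from scipy import stats
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(1)
    scores = rng.beta(2, 2, size=200)          # hypothetical target/response overlap scores

    # Model 1: a single error source (one Gaussian).
    mu, sigma = scores.mean(), scores.std(ddof=1)
    ll_one = stats.norm.logpdf(scores, mu, sigma).sum()
    aic_one = 2 * 2 - 2 * ll_one               # two free parameters

    # Model 2: two error sources (two-component Gaussian mixture).
    gm = GaussianMixture(n_components=2, random_state=0).fit(scores.reshape(-1, 1))
    aic_two = gm.aic(scores.reshape(-1, 1))

    # Lower AIC wins; the abstract above reports that the evidence favoured a
    # single deficit rather than two.
    print(f"one-source AIC: {aic_one:.1f}, two-source AIC: {aic_two:.1f}")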

Relevance:

30.00%

Publisher:

Abstract:

Influential models of short-term memory have attributed the fact that short words are recalled better than longer words in serial recall (the length effect) to articulatory rehearsal. Crucial for this link is the finding that the length effect disappears under articulatory suppression. We show, instead, that, under suppression, the length effect is abolished or reversed for real words but remains robust for nonwords. The latter finding is demonstrated in a variety of conditions: with lists of three and four nonwords, with nonwords drawn from closed and open sets, with spoken and written presentation, and with written and spoken output. Our interpretation is that the standard length effect derives from the number of phonological units to be retained. The length effect is abolished or reversed under suppression because this condition encourages reliance on lexical-semantic representations. Using these representations, longer words can more easily be reconstructed from degraded phonology than shorter words. © 2005 Elsevier Inc. All rights reserved.

Relevance:

30.00%

Publisher:

Abstract:

This paper is a progress report on a research path I first outlined in my contribution to “Words in Context: A Tribute to John Sinclair on his Retirement” (Heffer and Sauntson, 2000). Therefore, I first summarize that paper here, in order to provide the relevant background. The second half of the current paper consists of some further manual analyses, exploring various parameters and procedures that might assist in the design of an automated computational process for the identification of lexical sets. The automation itself is beyond the scope of the current paper.