927 results for "Multilingual lexical"
Abstract:
Chinese deep dyslexia is an important type of reading disorder that has attracted considerable research interest. In this thesis, three studies of Chinese deep dyslexia were carried out: (1) dominant and incidental clinical symptoms were collected, their relationships were analyzed, and error scores for different character and word types were compared, in order to test the standard reading model for alphabetic scripts and to develop a novel model for reading Chinese script; (2) building on these results, further neuropsychological analyses were performed on the basic organization of the lexical-semantic system and on semantic distance; (3) a rehabilitation scheme was designed to verify the research results. Using cognitive neuropsychological methods, the study focused mainly on deep dyslexic patients with brain damage. Their results were compared with those of normal readers under rapid-reading conditions, and computer simulation was used to model the patients' reading process. Both group analyses and single-case studies were carried out. This study is the first to systematically investigate the clinical symptoms of Chinese deep dyslexia. A novel model was developed on the hypothesis that the sublexical route comprises two parallel pathways: a phonetic sublexical pathway and a semantic sublexical pathway. Compared with deep dyslexia in alphabetic languages, Chinese deep dyslexia showed two characteristics: (1) no distinct word-class or imageability effects; (2) the organization of the Chinese lexical-semantic system correlates with the structural regularity, imageability, and decomposability of characters, and semantic associations are evoked more strongly than phonetic ones.
Abstract:
A number of functional neuroimaging studies of skilled readers have consistently shown activation to visual words in the left mid-fusiform cortex in the occipitotemporal sulcus (LMFC-OTS). Neuropsychological studies have also shown that lesions in left ventral occipitotemporal areas result in impaired visual word processing. Based on these empirical observations and some theoretical speculations, several researchers postulated that the LMFC-OTS is responsible for the instant, parallel, and holistic extraction of the abstract representation of letter strings, and labeled this piece of cortex the "visual word form area" (VWFA). Nonetheless, functional neuroimaging alone is a correlative rather than causal approach, and the lesions in previous studies were typically not confined to the LMFC-OTS but also involved other brain regions. Given these limitations, three fundamental questions remain unanswered: Is the LMFC-OTS necessary for visual word processing? Is it selective for visual word processing while unnecessary for processing other stimuli? What are its functional properties in visual word processing? This thesis aimed to address these questions through a series of neuropsychological, anatomical, and functional MRI experiments in four patients with different degrees of impairment in the left fusiform gyrus. Necessity: Detailed analysis of anatomical brain images revealed that the four patients had different foci of infarction. Specifically, the LMFC-OTS was damaged in one patient, while it remained intact in the other three. Neuropsychological experiments showed that the patient with lesions in the LMFC-OTS had severe impairments in reading aloud and recognizing Chinese characters, i.e., pure alexia.
The patient whose LMFC-OTS was intact, but in whom information from the left visual field (LVF) was blocked owing to lesions in the splenium of the corpus callosum, showed impaired Chinese character recognition when stimuli were presented in the LVF but not in the right visual field (RVF), i.e., left hemialexia. In contrast, the other two patients with intact LMFC-OTS processed Chinese characters normally. The fMRI experiments demonstrated no significant activation to Chinese characters in the LMFC-OTS of the pure alexic patient, nor in that of the left hemialexic patient when stimuli were presented in the LVF. On the other hand, the hemialexic patient showed activation in the LMFC-OTS when Chinese characters were presented in the RVF, as did the other two patients with intact LMFC-OTS. Together, these results point to the necessity of the LMFC-OTS for Chinese character processing. Selectivity: We tested the selectivity of the LMFC-OTS for visual word processing by systematically examining the patients' ability to process visual vs. auditory words, and word vs. non-word visual stimuli such as faces, objects, and colors. Results showed that the pure alexic patient could normally process auditory words (expression, understanding, and repetition of orally presented words) and non-word visual stimuli (faces, objects, colors, and numbers). Although this patient showed some impairment in naming faces, objects, and colors, his scores were only slightly lower than, or not significantly different from, those of the patients with intact LMFC-OTS. These data provide compelling evidence that the LMFC-OTS is not required for processing stimuli other than visual words, and thus is selective for visual word processing.
Functional properties: Using tasks involving multiple levels and aspects of word processing (Chinese character reading, phonological judgment, semantic judgment, identity judgment of abstract visual word representations, lexical decision, perceptual judgment of visual word appearance, dictation, copying, and voluntary writing), we attempted to identify the most critical dysfunction caused by damage to the LMFC-OTS, and thus to clarify the most essential function of this region. Results showed that, in addition to dysfunctions in Chinese character reading and in phonological and semantic judgment, the patient with LMFC-OTS lesions failed to judge correctly whether two characters (both compound and simple) with different surface features (e.g., different fonts; printed vs. handwritten vs. calligraphic styles; simplified vs. traditional characters; different orientations of strokes or whole characters) shared the same abstract representation. The patient initially showed severe impairment in processing both simple and compound characters: he could copy a compound character only stroke by stroke, not character by character or even radical by radical. During recovery, namely five months later, the patient could complete the abstract-representation tasks for simple characters but showed no improvement for compound characters; by then, however, he could copy compound characters radical by radical. Furthermore, the recovery of copying appeared to parallel that of abstract-representation judgment.
These observations indicate that the LMFC-OTS lesions in the pure alexic patient caused severe damage to the ability to extract abstract representations from lower-level units up to higher-level units; the patient had particular difficulty extracting the abstract representation of a whole character from its secondary units (e.g., radicals or single characters), and this ability was resistant to recovery. Therefore, the LMFC-OTS appears to be responsible for multilevel (particularly higher-level) abstract representations of visual word form. Successful extraction seems independent of access to phonological and semantic information, given that the alexic patient showed severe impairment in reading aloud and in semantic processing of simple characters while maintaining intact judgment of their abstract representations. However, it is also possible that the interaction between the abstract representation and its related information (e.g., phonological and semantic information) was damaged as well in this patient. Taken together, we conclude that: 1) the LMFC-OTS is necessary for Chinese character processing; 2) it is selective for Chinese character processing; and 3) its critical function is to extract multiple levels of abstract representation of visual words and possibly to transmit them to the phonological and semantic systems.
Abstract:
Nowadays many companies pursue branding strategies, because a strong brand provides confidence and reduces risk for consumers. Whether a brand is based on tangible products or on services, it possesses the common attributes of its category as well as its own unique attributes. Brand attributes are defined as descriptive features: intrinsic characteristics, values, or benefits endowed by users of the product or service (Keller, 1993; Romaniuk, 2003). Multi-attribute models of brands are among the most studied areas of consumer psychology (Werbel, 1978), and attribute weight is one of their key pursuits. Marketing practitioners also pay close attention to attribute evaluations, because such evaluations bear on a company's competitiveness and on its strategies for promotion and new product development (Green & Krieger, 1995). How, then, do brand attributes correlate with weight judgments? What characterizes the reaction in attribute judgment? In particular, what characterizes the attribute weight judgment process of a consumer facing homogeneous brands? Inspired by the lexical hypothesis from research on personality traits, this study chose search engine brands as its subject and adopted reaction time, a measure that many researchers have introduced into multi-attribute decision making. Research on the independence of affect and cognition and on affective primacy suggests that brand attributes can be categorized into informative and affective ones. Meanwhile, Park went further, differentiating representative and experiential attributes from functional ones; this classification reflects the trend toward emotional branding and the brand-consumer relationship. The research comprises three parts: a survey to collect attribute words, Experiment 1 on affective primacy, and Experiment 2 on the correlation between weight judgment and reaction time.
The results are as follows. In Experiment 1 we found: (1) affective words were not rated significantly differently from cognitive attributes, but affective words were responded to faster than cognitive ones; (2) subjects comprehended and responded differently to functional attribute words than to representative and experiential words. In Experiment 2 we found: (1) a significant negative correlation between attribute weight judgment and reaction time; (2) affective attributes elicited faster reactions than cognitive ones; (3) the reaction time difference between functional and representative or experiential attributes was significant, but there was no difference between representative and experiential attributes. In sum, we conclude that: (1) in word comprehension and weight judgment we observed affective primacy, even when the affective stimulus was presented as meaningful words; (2) the negative correlation between weight judgment and reaction time suggests that the more important the attribute, the quicker the reaction; (3) the reaction time differences among functional, representative, and experiential attributes reflect the trend toward emotional branding.
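The reported negative correlation between attribute importance and reaction time can be illustrated with a minimal sketch. The data below are hypothetical (not the thesis's measurements); only the analysis, a Pearson product-moment correlation, matches what the abstract describes.

```python
# Minimal sketch: Pearson correlation between attribute-weight ratings
# and reaction times. The data are invented for illustration only.
import statistics

def pearson_r(xs, ys):
    # Pearson product-moment correlation coefficient.
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical subject data: more important attributes answered faster.
weights = [7, 6, 6, 5, 4, 3, 2]                 # importance ratings (1-7)
rt_ms   = [620, 640, 700, 720, 810, 900, 950]   # reaction times in ms

r = pearson_r(weights, rt_ms)
print(round(r, 2))  # → -0.98, strongly negative: more important, quicker
```

A real analysis would of course also test the correlation's significance across subjects and items, as the thesis does; the sketch only shows the direction of the effect.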
Abstract:
Research on the form processing of Chinese characters (CC) has mainly concerned the effects of various form properties, and in most cases it was conducted after lexical processing was complete. The few studies of the early phases of visual perception focused on feature extraction in character recognition. Until now, no one has proposed studying form processing in the early phases of visual perception of CC. We hold that because form processing occurs in these early phases, it should be studied prelexically. Moreover, the visual perception of a CC is a course over which the character becomes clear gradually, so the effects of form properties should not be absolute, all-or-none phenomena. In this study we adopted four methods to investigate early-phase form processing systematically: tachistoscopic repetition, gradually increasing presentation time, gradually enlarging the visual angle, and non-tachistoscopic search and naming. Under these degraded visual conditions, the instantaneous course of early-phase processing was slowed and prolonged, laying its growth open to observation. We captured the characteristics of early-phase form processing by analyzing reaction speed and recognition accuracy. As visual angle and presentation time increased, clarity improved, allowing us to relate the effects of form properties to improving visual clarity. The results were as follows: ① in the early phases of visual perception of CC, the effects of various form properties were present; ② the magnitude of these effects decreased as visual conditions improved. We proposed the concept of a character's spatial transparency, together with an algorithm for it, to explain these effects of form properties.
Furthermore, a model was discussed to help explain why the magnitude of the effects changed as visual conditions improved. ③ The early phases of visual perception of CC are not the locus of the frequency effect.
Abstract:
Type-omega DPLs (Denotational Proof Languages) are languages for proof presentation and search that offer strong soundness guarantees. LCF-type systems such as HOL offer similar guarantees, but their soundness relies heavily on static type systems. By contrast, DPLs ensure soundness dynamically, through their evaluation semantics; no type system is necessary. This is possible owing to a novel two-tier syntax that separates deductions from computations, and to the abstraction of assumption bases, which is factored into the semantics of the language and allows for sound evaluation. Every type-omega DPL properly contains a type-alpha DPL, which can be used to present proofs in a lucid and detailed form, exclusively in terms of primitive inference rules. Derived inference rules are expressed as user-defined methods, which are "proof recipes" that take arguments and dynamically perform appropriate deductions. Methods arise naturally via parametric abstraction over type-alpha proofs. In that light, the evaluation of a method call can be viewed as a computation that carries out a type-alpha deduction. The type-alpha proof "unwound" by such a method call is called the "certificate" of the call. Certificates can be checked by exceptionally simple type-alpha interpreters, and thus they are useful whenever we wish to minimize our trusted base. Methods are statically closed over lexical environments, but dynamically scoped over assumption bases. They can take other methods as arguments, they can iterate, and they can branch conditionally. These capabilities, in tandem with the bifurcated syntax of type-omega DPLs and their dynamic assumption-base semantics, allow the user to define methods in a style that is disciplined enough to ensure soundness yet fluid enough to permit succinct and perspicuous expression of arbitrarily sophisticated derived inference rules. 
We demonstrate every major feature of type-omega DPLs by defining and studying NDL-omega, a higher-order, lexically scoped, call-by-value type-omega DPL for classical zero-order natural deduction---a simple choice that allows us to focus on type-omega syntax and semantics rather than on the subtleties of the underlying logic. We start by illustrating how type-alpha DPLs naturally lead to type-omega DPLs by way of abstraction; present the formal syntax and semantics of NDL-omega; prove several results about it, including soundness; give numerous examples of methods; point out connections to the lambda-phi calculus, a very general framework for type-omega DPLs; introduce a notion of computational and deductive cost; define several instrumented interpreters for computing such costs and for generating certificates; explore the use of type-omega DPLs as general programming languages; show that DPLs do not have to be type-less by formulating a static Hindley-Milner polymorphic type system for NDL-omega; discuss some idiosyncrasies of type-omega DPLs such as the potential divergence of proof checking; and compare type-omega DPLs to other approaches to proof presentation and discovery. Finally, a complete implementation of NDL-omega in SML-NJ is given for users who want to run the examples and experiment with the language.
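The central mechanism described above, that primitive inference rules check their premises against an assumption base at evaluation time, so any user-defined method built from them can only yield sound conclusions, can be sketched as follows. This is not NDL-omega syntax; it is an illustrative Python sketch of the assumption-base idea, with propositions as tuples and a derived method (hypothetical syllogism) assumed purely for demonstration.

```python
# Illustrative sketch (not NDL-omega) of dynamic assumption-base checking:
# primitive rules verify premises at run time, so a derived "method" is
# just a computation that can only produce soundly derived conclusions.

def claim(base, p):
    # A proposition may be used only if it is in the assumption base.
    if p not in base:
        raise ValueError(f"unsound: {p!r} not in assumption base")
    return p

def modus_ponens(base, impl, antecedent):
    # From ("if", p, q) and p, derive q; premises are checked dynamically.
    claim(base, impl)
    claim(base, antecedent)
    tag, p, q = impl
    assert tag == "if" and p == antecedent
    return q, base | {q}     # conclusion added to (a copy of) the base

def hypothetical_syllogism(base, pq, qr):
    # A derived method: from p->q, q->r, and p, derive r by chaining
    # primitive rule applications; its "certificate" is that chain.
    _, p, _ = pq
    q, base = modus_ponens(base, pq, p)
    r, base = modus_ponens(base, qr, q)
    return r, base

base = {("if", "p", "q"), ("if", "q", "r"), "p"}
concl, base2 = hypothetical_syllogism(base, ("if", "p", "q"), ("if", "q", "r"))
print(concl)  # prints: r
```

The analogy is loose (real DPL methods are closed over lexical environments and scoped dynamically over assumption bases, and the trace of primitive calls is what the thesis calls a certificate), but it shows why no static type system is needed for soundness: an unsound step simply fails at evaluation time.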
Abstract:
Traditionally, language speakers are categorised as mono-lingual, bilingual, or multilingual. It is traditionally assumed in English language education that the ‘lingual’ is something that can be ‘fixed’ in form, written down to be learnt, and taught. Accordingly, the ‘mono’-lingual will have a ‘fixed’ linguistic form. Such a ‘form’ differs according to a number of criteria or influences including region or ‘type’ of English (for example, World Englishes) but is nevertheless assumed to be a ‘form’. ‘Mono-lingualism’ is defined and believed, traditionally, to be ‘speaking one language’; wherever that language is; or whatever that language may be. In this chapter, grounded in an individual subjective philosophy of language, we question this traditional definition. Viewing language from the philosophical perspectives such as those of Bakhtin and Voloshinov, we argue that the prominence of ‘context’ and ‘consciousness’ in language means that to ‘fix’ the form of a language goes against the very spirit of how it is formed and used. We thus challenge the categorisation of ‘mono’-lingualism; proposing that such a categorisation is actually a category error, or a case ‘in which a property is ascribed to a thing that could not possibly have that property’ (Restivo, 2013, p. 175), in this case the property of ‘mono’. Using this proposition as a starting point, we suggest that more time be devoted to language in its context and as per its genuine use as a vehicle for consciousness. We theorise this can be done through a ‘literacy’ based approach which fronts the context of language use rather than the language itself. We outline how we envision this working for teachers, students and materials developers of English Language Education materials in a global setting. To do this we consider Scotland’s Curriculum for Excellence as an exemplar to promote conscious language use in context.
Abstract:
Tedd, L.A. & Large, A. (2005). Digital libraries: principles and practice in a global environment. Munich: K.G. Saur.
Abstract:
Jones, E. (2007). The Territory of Television: S4C and the Representation of the 'Whole of Wales.' In M. Cormack and N. Hourigan (Eds.), Minority Language Media: Concepts, Critiques and Case Studies (pp. 188-211). No. 138. Bristol: Multilingual Matters. RAE2008
Abstract:
This paper briefly outlines the present status of English in Norway, principally in relation to the growing presence of English lexical borrowings in Norwegian. Some attention is also devoted to the views held by Norwegian linguists on the potential threat that the English language represents, particularly in domains where it is likely to supersede Norwegian.
Abstract:
http://ijl.oxfordjournals.org/cgi/reprint/ecp022?ijkey=FWAwWPvILuZDT1S&keytype=ref
Abstract:
English & Polish jokes based on linguistic ambiguity are contrasted. Linguistic ambiguity results from a multiplicity of semantic interpretations motivated by structural pattern. The meanings can be "translated" either by variations of the corresponding minimal strings or by specifying the type & extent of modification needed between the two interpretations. C. F. Hockett's (1972) translatability notion, that a joke is linguistic if it cannot readily be translated into other languages without losing its humor, is used to interpret some cross-linguistic jokes. It is claimed that additional intralinguistic criteria are needed to classify jokes. By using a syntactic representation, the humor can be explained & compared cross-linguistically. Since the mapping of semantic values onto lexical units is highly language-specific, translatability is much less frequent with lexical ambiguity. Similarly, phonological jokes are not usually translatable. Pragmatic ambiguity can be translated on the basis of H. P. Grice's (1975) cooperative principle of conversation, which calls for discourse interpretations. If the distinction between linguistic & nonlinguistic jokes is based on translatability, pragmatic jokes must be excluded from the classification. Because of their universality, pragmatic jokes should be included in the linguistic classification by going beyond the translatability criteria & using intralinguistic features to describe them.
Abstract:
This work concerns selected methods of extracting (excerpting) lexical information from electronic text collections. Its aim is, first, to formulate new, original methods that can be useful for obtaining material for lexical analyses, and then to test them on a selected collection of texts. The methods were intended not to require advanced knowledge of computer programming while still yielding valuable results, a method's value being measured by its excerption yield. The three formulated methods were refined and optimized. The method for excerpting new units yielded over 1,000 new, previously unrecorded words; the acronym-based collocation excerption method yields over 6,000 units; and the collocation excerption method based on plural endings yielded over 110,000 extracted units.
Abstract:
Faculty of Modern Languages: Institute of Linguistics
Abstract:
Faculty of English
Polish surnames of the inhabitants of Drohobycz at the end of the 18th and the beginning of the 19th century against an East Slavic background
Abstract:
Faculty of Modern Languages: Institute of Russian Philology. Department of Ukrainian Studies