618 results for Paronymy Dictionaries
Abstract:
Title of the Master's thesis: Análisis de la preposición hacia y establecimiento de sus equivalentes en finés (trans. Analysis of the Spanish preposition hacia and the establishment of its equivalents in Finnish). Abstract: The aim of this Master's thesis is to provide a detailed analysis of the Spanish preposition hacia from a cognitive perspective and to establish its equivalents in Finnish. My purpose is to demonstrate the suitability of both cognitive perspectives and Contrastive Linguistics for semantic analysis. This thesis is divided into five chapters. The first chapter presents a critical review of the monolingual lexicographical treatment and semantic analysis of the Spanish preposition hacia in the major reference works. This chapter brings out the inadequacies and omissions present in all of the definitions reviewed, and shows that these problems are merely the uppermost stage of an ontological (and therefore methodological) problem in the treatment of prepositions. The second chapter presents the theoretical and methodological perspective adopted in this thesis for the monolingual analysis and definition of the Spanish preposition hacia, following mainly the guidelines established by G. Lakoff (1987) and by R. Langacker (2008) in his Cognitive Grammar. In addition, and within the same paradigm, recent analytical and methodological contributions to the treatment of polysemy in language are critically discussed (cf. Tyler and Evans 2003). In the third chapter, in accordance with the requirements regarding the use of empirical corpus data, I set out an original monolingual analysis of the Spanish preposition hacia in observance of the principles and methodology spelled out in the second chapter. The main objective of this chapter is to build a full-fledged semantic representation of the polysemy of this preposition in order to understand its meanings and articulate them with Finnish (and, potentially, other languages). The fourth chapter, building on the results of chapter 3, examines, describes, and establishes the corresponding Finnish equivalents of this preposition. The results obtained in this chapter are also contrasted with the current bilingual lexicographical definitions found in the most important dictionaries and grammars. Finally, in the fifth chapter, the results of this work are discussed critically: observations are offered regarding the ontological and theoretical assumptions as well as the methodological perspective adopted, and I present some notes towards a general methodology for the semantic analysis of Spanish prepositions to be developed in further investigations.
Spanish abstract (translated): The aim of this work, which we characterize as a comparative-analytical task, is to provide a detailed analysis of the Spanish preposition hacia from a cognitive perspective, through the establishment of its equivalents in Finnish. We thus seek to demonstrate the adequacy of a cognitive perspective both for examining and for establishing and articulating the series of equivalents that a particle, in our case a preposition, finds in another language. In this way, and in contrast to canonical definitions that warn of the impossibility of a complete characterization of the full range of uses of a preposition, it is shown that, through the application of an adequate theoretical-analytical methodology, a viable definition can be constructed at both a hierarchical and a descriptive level. This thesis is divided into five chapters. The first chapter offers an exposition and critical review of the monolingual lexicographical and analytical treatment that the preposition hacia has received in the main reference works, showing that the inadequacies and omissions present in all of the definitions analysed represent only the uppermost stage of an ontological, and therefore methodological, problem in the treatment of prepositions. The second chapter presents the theoretical and methodological perspective adopted in this thesis for the monolingual analysis and definition of the preposition hacia, guided by the proposals of G. Lakoff and by the foundations established by R. Langacker in his cognitive proposal for a new grammar. Jointly and complementarily, and within the same paradigm, we employ, critically discuss, and develop different analytical and methodological contributions to the treatment of polysemy in locative linguistic units. In the third chapter, in keeping with the requirements concerning the use of empirical data obtained from textual corpora, an original monolingual analysis of the preposition hacia is set out in observance of the principles and methodology made explicit in the second chapter; its main objective is the construction of a semantic representation of the polysemy of the preposition that comprises and articulates the prototypical senses specified for it. In the fourth chapter, in accordance with the results of our monolingual analysis of the preposition, the corresponding Finnish equivalents of hacia are examined, described, and established; the results obtained are also contrasted with the current bilingual lexicographical definitions. The fifth and final chapter gathers observations on the ontological and theoretical-methodological postulates of the perspective adopted, together with some notes towards the construction of a general methodology for the semantic analysis of prepositions.
Abstract:
Circular, naming the committee members for the promotion of the Jewish Encyclopedia; undated (ca. 1928)
Abstract:
Most of the world’s languages lack electronic word form dictionaries. The linguists who gather such dictionaries could be helped with an efficient morphology workbench that adapts to different environments and uses. A widely usable workbench could be characterized, ideally, as generally applicable, extensible, and freely available (GEA). It seems that such a solution could be implemented in the framework of finite-state methods. The current work defines the GEA desiderata and starts a series of articles concerning these desiderata in finite-state morphology. Subsequent parts will review the state of the art and present an action plan toward creating a widely usable finite-state morphology workbench.
Abstract:
The main objects of the investigation were the syntactic functions of adjectives. The interest in these functions arises from the different modes of use in which an adjective can occur. Altogether, an adjective can take three different modes of use: attributive (e.g. a fast car), predicative (e.g. the car is fast) and adverbial (e.g. the car drives fast). Since an adjective cannot always take every function, some dictionaries (especially learners' dictionaries) provide information within the lexical entry about any restrictions. The research consisted of a comparison of the lexical entries of adjectives in four selected monolingual German dictionaries. The syntactic information on adjectives was compared in order to work out the differences and common characteristics of the lexical entries with respect to the different modes of use, and to analyse and assess them. The differences in the syntactic information were in the foreground: where entries diverged, it had to be determined which entry is grammatically correct, or whether an entry is in fact wrong. To find this out, an empirical analysis was needed, based on how an adjective is actually used in context wherever the dictionaries do not agree. The correctness and homogeneity of lexical entries in German dictionaries are very important for supporting learners of German and for ensuring the user-friendliness of dictionaries. The investigation showed that in almost half of the cases (over 40 %) the syntactic information on adjectives differs between the dictionaries. These differences naturally make it very difficult for non-native speakers to understand the correct usage of an adjective. The main aim of the doctoral thesis was therefore to establish and demonstrate the correct syntactic usage of a certain set of adjectives.
Abstract:
592 pp.
Abstract:
Language Documentation and Description as Language Planning: Working with Three Signed Minority Languages
Sign languages are minority languages that typically have a low status in society. Language planning has traditionally been controlled from outside the sign-language community. Even though signed languages lack a written form, dictionaries have played an important role in language description and as tools in foreign language learning. The background to the present study on sign language documentation and description as language planning is empirical research in three dictionary projects in Finland-Swedish Sign Language, Albanian Sign Language, and Kosovar Sign Language. The study consists of an introductory article and five detailed studies which address language planning from different perspectives. The theoretical basis of the study is sociocultural linguistics. The research methods used were participant observation, interviews, focus group discussions, and document analysis. The primary research questions are the following: (1) What is the role of dictionary and lexicographic work in language planning, in research on undocumented signed languages, and in relation to the language community as such? (2) What factors are particular challenges in the documentation of a sign language and should therefore be given special attention during lexicographic work? (3) Is a conventional dictionary a valid tool for describing an undocumented sign language? The results indicate that lexicographic work has a central part to play in language documentation, both as part of basic research on undocumented sign languages and for status planning. Existing dictionary work has contributed new knowledge about the languages and the language communities. The lexicographic work adds to the linguistic advocacy work done by the community itself with the aim of vitalizing the language, empowering the community, receiving governmental recognition for the language, and improving the linguistic (human) rights of the language users. The history of signed languages as low-status languages has consequences for language planning and lexicography. One challenge that the study discusses is the relationship between the sign-language community and the hearing sign linguist. In order to make it possible for the community itself to take the lead in a language planning process, raising linguistic awareness within the community is crucial. The results give rise to the question of whether lexicographic work is of more importance for status planning than for corpus planning. A conventional dictionary as a tool for describing an undocumented sign language is criticised. The study discusses differences between signed and spoken/written languages that are challenging for lexicographic presentation. Alternative electronic lexicographic approaches including both lexicon and grammar are also discussed.
Keywords: sign language, Finland-Swedish Sign Language, Albanian Sign Language, Kosovar Sign Language, language documentation and description, language planning, lexicography
Abstract:
This paper presents a preliminary analysis of Kannada WordNet and a set of relevant computational tools. Although the design has been inspired by the famous English WordNet and, to a certain extent, by the Hindi WordNet, the unique features of Kannada WordNet are graded antonymy and meronymy relationships, nominal as well as verbal compounding, complex verb constructions, and an efficient underlying database design (built to handle storage and display of Kannada Unicode characters). Kannada WordNet will not only add to the sparse collection of machine-readable Kannada dictionaries but also give new insights into the Kannada vocabulary. It provides a sufficient interface for applications such as Kannada machine translation, spell checking, and semantic analysis.
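To make the data model concrete, the following is a minimal sketch of a WordNet-style lexical database with graded antonymy and meronymy relations, in the spirit of the features described above. The class names, fields, and sample entries are illustrative assumptions, not the actual Kannada WordNet schema; real entries would store Kannada lemmas as Unicode strings.

```python
from dataclasses import dataclass, field


@dataclass
class Synset:
    synset_id: str
    pos: str                                   # part of speech, e.g. "noun", "adj"
    lemmas: list                               # word forms (Unicode strings for Kannada)
    gloss: str
    graded_antonyms: dict = field(default_factory=dict)   # antonym synset id -> strength in [0, 1]
    meronyms: list = field(default_factory=list)          # "has-part" synset ids


class WordNet:
    def __init__(self):
        self.synsets = {}
        self.index = {}                        # lemma -> list of synset ids

    def add(self, synset):
        self.synsets[synset.synset_id] = synset
        for lemma in synset.lemmas:
            self.index.setdefault(lemma, []).append(synset.synset_id)

    def lookup(self, lemma):
        return [self.synsets[sid] for sid in self.index.get(lemma, [])]


# Toy usage with English stand-in lemmas; graded antonymy lets "cold"
# oppose "hot" more strongly than "warm" does.
wn = WordNet()
hot = Synset("adj.0001", "adj", ["hot"], "of high temperature",
             graded_antonyms={"adj.0003": 1.0, "adj.0002": 0.4})
warm = Synset("adj.0002", "adj", ["warm"], "moderately hot")
cold = Synset("adj.0003", "adj", ["cold"], "of low temperature")
for s in (hot, warm, cold):
    wn.add(s)
print([s.synset_id for s in wn.lookup("hot")])
```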
Abstract:
Real-time object tracking is a critical task in many computer vision applications. Achieving rapid and robust tracking while handling changes in object pose and size, varying illumination, and partial occlusion is a challenging task given the limited amount of computational resources. In this paper we propose a real-time object tracker in an ℓ1 framework addressing these issues. In the proposed approach, dictionaries containing templates of overlapping object fragments are created. The candidate fragments are sparsely represented in the dictionary fragment space by solving the ℓ1-regularized least squares problem. The nonzero coefficients indicate the relative motion between the target and candidate fragments along with a fidelity measure. The final object motion is obtained by fusing the reliable motion information. The dictionary is updated based on the object likelihood map. The proposed tracking algorithm is tested on various challenging videos and found to outperform an earlier approach.
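The core numerical step described above is an ℓ1-regularized least squares (lasso) solve of a candidate fragment against the dictionary of template fragments. Below is a minimal sketch of that step using plain ISTA; the dictionary contents, the value of the regularization weight, and the tracker's fusion and update logic are not reproduced here, and the solver choice is an assumption rather than the authors' implementation.

```python
import numpy as np


def ista_lasso(D, y, lam=0.05, n_iter=200):
    """Solve min_x 0.5*||D x - y||_2^2 + lam*||x||_1 with ISTA.

    D : (m, k) dictionary whose columns are template-fragment features.
    y : (m,)   feature vector of a candidate fragment.
    Returns the sparse coefficient vector x of length k.
    """
    # Step size from the Lipschitz constant of the gradient, ||D||_2^2.
    step = 1.0 / (np.linalg.norm(D, 2) ** 2)
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ x - y)
        z = x - step * grad
        # Soft-thresholding: proximal operator of the l1 norm.
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return x


# Toy usage: a random dictionary of 50 fragment templates with 20-dim features.
rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)            # unit-norm atoms
y = 0.9 * D[:, 3] + 0.01 * rng.standard_normal(20)
x = ista_lasso(D, y)
print("largest coefficients:", np.argsort(-np.abs(x))[:3])
```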
Abstract:
Scatter/Gather systems are increasingly becoming useful in browsing document corpora. The usability of present-day systems is restricted to monolingual corpora, and their methods for clustering and labeling do not easily extend to the multilingual setting, especially in the absence of dictionaries or machine translation. In this paper, we study the cluster labeling problem for multilingual corpora in the absence of machine translation, but using comparable corpora. Using a variational approach, we show that multilingual topic models can effectively handle the cluster labeling problem, which in turn allows us to design a novel Scatter/Gather system, ShoBha. Experimental results on three datasets, namely the Canadian Hansards corpus, the overlapping Wikipedia articles in English, Hindi and Bengali, and a trilingual news corpus containing 41,000 articles, confirm the utility of the proposed system.
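As a rough illustration of the underlying idea, the sketch below labels a cluster of documents in one language with words from another by fitting a topic model over merged comparable document pairs, so that aligned documents share a topic distribution. This is a simple surrogate built on scikit-learn's LatentDirichletAllocation, not the variational multilingual topic model of the paper; the tiny corpus, the language tags, and the labeling heuristic are all illustrative assumptions.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Comparable corpus: documents_l1[i] and documents_l2[i] cover the same story
# in two languages (tiny English-only stand-in data here).
documents_l1 = ["parliament passed the budget bill",
                "the team won the cricket match"]
documents_l2 = ["budget debate in the national assembly",
                "cricket series victory for the visitors"]


def tag(doc, lang):
    """Prefix every token with a language tag so the two vocabularies stay
    separable inside one shared topic model."""
    return " ".join(f"{lang}_{w}" for w in doc.split())


# Merge each aligned pair into one pseudo-document so that both languages
# share a single topic distribution.
merged = [tag(a, "l1") + " " + tag(b, "l2")
          for a, b in zip(documents_l1, documents_l2)]

vec = CountVectorizer()
X = vec.fit_transform(merged)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
vocab = np.asarray(vec.get_feature_names_out())


def label_cluster(cluster_docs_l2, target_lang="l1", top_n=3):
    """Label a cluster of language-2 documents with top language-1 words
    from the cluster's dominant topic."""
    Xc = vec.transform([tag(d, "l2") for d in cluster_docs_l2])
    topic = int(np.argmax(lda.transform(Xc).sum(axis=0)))
    weights = lda.components_[topic]
    mask = np.array([w.startswith(target_lang + "_") for w in vocab])
    top = vocab[mask][np.argsort(-weights[mask])][:top_n]
    return [w.split("_", 1)[1] for w in top]


print(label_cluster(["cricket match report"]))
```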
Abstract:
Tight fusion frames which form optimal packings in Grassmannian manifolds are of interest in signal processing and communication applications. In this paper, we study optimal packings and fusion frames having a specific structure for use in block sparse recovery problems. The paper starts with a sufficient condition for a set of subspaces to be an optimal packing. Further, a method of using optimal Grassmannian frames to construct tight fusion frames which form optimal packings is given. Then, we derive a lower bound on the block coherence of dictionaries used in block sparse recovery. From this result, we conclude that the Grassmannian fusion frames considered in this paper are optimal from the block coherence point of view.
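For concreteness, block coherence is commonly defined, for a dictionary with unit-norm columns partitioned into blocks of d columns each, as mu_B = max over i != j of sigma_max(D_i^T D_j) / d, which is the quantity the lower bound above concerns. Below is a minimal sketch of that computation; the random dictionary and the exact normalization convention are assumptions standing in for the structured Grassmannian constructions studied in the paper.

```python
import numpy as np


def block_coherence(D, block_size):
    """Block coherence of a dictionary with unit-norm columns, grouped into
    consecutive blocks of `block_size` columns:
        mu_B = max_{i != j} sigma_max(D_i^T D_j) / block_size
    """
    m, n = D.shape
    assert n % block_size == 0, "columns must split evenly into blocks"
    blocks = [D[:, k:k + block_size] for k in range(0, n, block_size)]
    mu = 0.0
    for i in range(len(blocks)):
        for j in range(len(blocks)):
            if i == j:
                continue
            # Spectral norm of the cross-Gram block = largest singular value.
            sigma_max = np.linalg.norm(blocks[i].T @ blocks[j], 2)
            mu = max(mu, sigma_max / block_size)
    return mu


# Toy usage: a random 16 x 32 dictionary split into 8 blocks of 4 columns.
rng = np.random.default_rng(0)
D = rng.standard_normal((16, 32))
D /= np.linalg.norm(D, axis=0)
print("block coherence:", block_coherence(D, block_size=4))
```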
Abstract:
To perform super resolution of low-resolution images, state-of-the-art methods are based on learning a pair of low-resolution and high-resolution dictionaries from multiple images. These trained dictionaries are used to replace patches in the low-resolution image with appropriate matching patches from the high-resolution dictionary. In this paper we propose using a single common image as the dictionary, in conjunction with approximate nearest neighbour fields (ANNF), to perform super resolution (SR). By using a common source image, we are able to bypass the learning phase and also to reduce the dictionary from a collection of hundreds of images to a single image. By adapting recent developments in ANNF computation to suit super resolution, we are able to perform much faster and more accurate SR than existing techniques. To establish this claim, we compare the proposed algorithm against various state-of-the-art algorithms, and show that we are able to achieve better and faster reconstruction without any training.
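A minimal sketch of the overall pipeline: build LR/HR patch pairs from a single dictionary image, match each input LR patch to its nearest dictionary LR patch, and paste the corresponding HR patch into the output with averaging over overlaps. Brute-force nearest-neighbour search via scikit-learn stands in here for the approximate nearest-neighbour fields of the paper, and the patch size, scale factor, and random toy images are illustrative assumptions.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors


def extract_patches(img, size, stride):
    H, W = img.shape
    coords = [(i, j) for i in range(0, H - size + 1, stride)
                     for j in range(0, W - size + 1, stride)]
    patches = np.stack([img[i:i + size, j:j + size].ravel() for i, j in coords])
    return patches, coords


def downscale(img, s):
    """Simple average-pooling downscale by an integer factor s."""
    H, W = img.shape
    return img[:H - H % s, :W - W % s].reshape(H // s, s, W // s, s).mean(axis=(1, 3))


def super_resolve(lr_input, dict_hr, scale=2, lr_patch=5):
    """Upscale `lr_input` using a single high-resolution dictionary image."""
    hr_patch = lr_patch * scale
    dict_lr = downscale(dict_hr, scale)

    # LR/HR patch pairs from the dictionary image (dense stride-1 sampling).
    lr_feats, lr_coords = extract_patches(dict_lr, lr_patch, 1)
    nn = NearestNeighbors(n_neighbors=1).fit(lr_feats)

    H, W = lr_input.shape
    out = np.zeros((H * scale, W * scale))
    weight = np.zeros_like(out)
    in_feats, in_coords = extract_patches(lr_input, lr_patch, 1)
    _, idx = nn.kneighbors(in_feats)

    # Paste the HR patch of the best-matching dictionary LR patch, averaging overlaps.
    for (i, j), k in zip(in_coords, idx[:, 0]):
        di, dj = lr_coords[k]
        hr_block = dict_hr[di * scale:di * scale + hr_patch,
                           dj * scale:dj * scale + hr_patch]
        out[i * scale:i * scale + hr_patch, j * scale:j * scale + hr_patch] += hr_block
        weight[i * scale:i * scale + hr_patch, j * scale:j * scale + hr_patch] += 1.0
    return out / np.maximum(weight, 1e-8)


# Toy usage with random "images"; real use would load grayscale images.
rng = np.random.default_rng(0)
dictionary_image = rng.random((64, 64))
low_res = downscale(rng.random((64, 64)), 2)
print(super_resolve(low_res, dictionary_image).shape)   # (64, 64)
```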
Abstract:
User authentication is essential for accessing computing resources, network resources, email accounts, online portals, etc. To authenticate a user, the system stores user credentials (user id and password pairs). Discovering user passwords from a system, and protecting them against any such attack, has long been a problem of interest. In this work we show that passwords are still vulnerable to hash-chain-based attacks and to efficient dictionary attacks. Human-generated passwords follow identifiable patterns. We have analysed a sample of 19 million passwords of different lengths, available online, and studied the distribution of the symbols in the password strings. We show that the distribution of symbols in user passwords is affected by the native language of the user. From symbol distributions we can build smart and efficient dictionaries, which are smaller in size while their coverage of plausible passwords from the key space is large. These smart dictionaries make dictionary-based attacks practical.
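A minimal sketch of the kind of analysis described: estimate a symbol distribution from a password sample and use it to rank a raw wordlist so that the most plausible guesses come first, yielding a smaller, higher-yield dictionary. The tiny in-line sample, the add-one smoothing over a nominal 95-symbol printable-ASCII alphabet, and the length-normalized scoring are illustrative assumptions, not the authors' exact construction.

```python
import math
from collections import Counter

# Stand-in sample; the real analysis would stream millions of leaked passwords.
passwords = ["password1", "letmein", "qwerty12", "dragon99", "iloveyou"]

# Overall symbol distribution across the sample.
symbol_counts = Counter(ch for pw in passwords for ch in pw)
total = sum(symbol_counts.values())


def plausibility(candidate):
    """Average per-symbol log-probability of a candidate password under the
    estimated symbol distribution (add-one smoothing over 95 printable
    ASCII symbols)."""
    if not candidate:
        return float("-inf")
    logp = sum(math.log((symbol_counts[ch] + 1) / (total + 95))
               for ch in candidate)
    return logp / len(candidate)


# Rank a raw wordlist into a smaller, higher-yield "smart" dictionary.
raw_candidates = ["sunshine1", "zxqvjk", "qwerty99", "p4ssw0rd"]
smart_dictionary = sorted(raw_candidates, key=plausibility, reverse=True)
print(smart_dictionary)
```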
Abstract:
In big data image/video analytics, we encounter the problem of learning an over-complete dictionary for sparse representation from a large training dataset which cannot be processed at once because of storage and computational constraints. To tackle dictionary learning in such scenarios, we propose an algorithm that exploits the inherent clustered structure of the training data and makes use of a divide-and-conquer approach. The fundamental idea behind the algorithm is to partition the training dataset into smaller clusters and learn local dictionaries for each cluster. Subsequently, the local dictionaries are merged to form a global dictionary. Merging is done by solving another dictionary learning problem on the atoms of the locally trained dictionaries. This algorithm is referred to as the split-and-merge algorithm. We show that the proposed algorithm is efficient in its usage of memory and computational complexity, and performs on par with the standard learning strategy, which operates on the entire dataset at once. As an application, we consider the problem of image denoising. We present a comparative analysis of our algorithm with standard learning techniques that use the entire database at once, in terms of training and denoising performance. We observe that the split-and-merge algorithm results in a remarkable reduction of training time without significantly affecting the denoising performance.
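A minimal sketch of the split-and-merge idea using off-the-shelf scikit-learn components, with KMeans providing the partition and MiniBatchDictionaryLearning handling both the local and global stages. The cluster count, dictionary sizes, sparsity settings, and random stand-in data are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
X = rng.standard_normal((3000, 64))          # stand-in for patch/training data

# Split: partition the training set into clusters.
n_clusters = 4
labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)

# Learn a local dictionary per cluster.
local_atoms = []
for c in range(n_clusters):
    Xc = X[labels == c]
    local = MiniBatchDictionaryLearning(n_components=32, alpha=1.0,
                                        batch_size=64, random_state=0).fit(Xc)
    local_atoms.append(local.components_)     # (32, 64) atoms per cluster

# Merge: treat the pooled local atoms as training data for a second
# dictionary-learning problem to obtain the global dictionary.
pooled = np.vstack(local_atoms)               # (4*32, 64)
global_dl = MiniBatchDictionaryLearning(n_components=64, alpha=1.0,
                                        batch_size=32, random_state=0).fit(pooled)
global_dictionary = global_dl.components_     # (64, 64) global atoms
print(global_dictionary.shape)
```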
Abstract:
Cross-domain and cross-modal matching has many applications in the field of computer vision and pattern recognition; a few examples are heterogeneous face recognition, cross-view action recognition, etc. This is a very challenging task since the data in the two domains can differ significantly. In this work, we propose a coupled dictionary and transformation learning approach that models the relationship between the data in both domains. The approach learns a pair of transformation matrices that map the data in the two domains in such a manner that they share common sparse representations with respect to their own dictionaries in the transformed space. The dictionaries for the two domains are learnt in a coupled manner with an additional discriminative term to ensure improved recognition performance. The dictionaries and the transformation matrices are jointly updated in an iterative manner. The applicability of the proposed approach is illustrated by evaluating its performance on different challenging tasks: face recognition across pose, illumination and resolution, heterogeneous face recognition, and cross-view action recognition. Extensive experiments on five datasets, namely CMU-PIE, Multi-PIE, ChokePoint, HFB and IXMAS, and comparisons with several state-of-the-art approaches show the effectiveness of the proposed approach.
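A heavily simplified sketch of the alternating scheme described above: paired samples from two domains are mapped by learned transformations into a common-dimensional space, a shared sparse code is computed for each pair against the two dictionaries, and the dictionaries and transformations are then updated by regularized least squares. The objective below, the omission of the discriminative term, the row renormalization of the transformations (used to avoid the trivial all-zero solution), and all dimensions and regularization values are assumptions for illustration, not the authors' formulation.

```python
import numpy as np
from sklearn.decomposition import sparse_encode

rng = np.random.default_rng(0)
n, d1, d2, p, k = 200, 40, 30, 20, 25      # samples, domain dims, latent dim, atoms
X1 = rng.standard_normal((d1, n))          # paired data: column i of X1 and X2
X2 = rng.standard_normal((d2, n))          # describe the same object


def unit_columns(M):
    return M / np.maximum(np.linalg.norm(M, axis=0, keepdims=True), 1e-8)


# Initialize transformations and dictionaries.
P1, P2 = rng.standard_normal((p, d1)), rng.standard_normal((p, d2))
D1 = unit_columns(rng.standard_normal((p, k)))
D2 = unit_columns(rng.standard_normal((p, k)))
lam, eps = 0.1, 1e-3

for _ in range(10):
    Y1, Y2 = P1 @ X1, P2 @ X2
    # (i) shared sparse codes A for both transformed domains (stacked lasso).
    Y = np.vstack([Y1, Y2])                                             # (2p, n)
    D = np.vstack([D1, D2])                                             # (2p, k)
    A = sparse_encode(Y.T, D.T, algorithm="lasso_lars", alpha=lam).T    # (k, n)
    # (ii) dictionary updates by regularized least squares + renormalization.
    G = A @ A.T + eps * np.eye(k)
    D1 = unit_columns(Y1 @ A.T @ np.linalg.inv(G))
    D2 = unit_columns(Y2 @ A.T @ np.linalg.inv(G))
    # (iii) transformation updates by least squares, rows renormalized as a
    # sketch-level surrogate for the constraints used in the paper.
    P1 = D1 @ A @ X1.T @ np.linalg.inv(X1 @ X1.T + eps * np.eye(d1))
    P2 = D2 @ A @ X2.T @ np.linalg.inv(X2 @ X2.T + eps * np.eye(d2))
    P1 /= np.maximum(np.linalg.norm(P1, axis=1, keepdims=True), 1e-8)
    P2 /= np.maximum(np.linalg.norm(P2, axis=1, keepdims=True), 1e-8)

# After training, cross-domain matching can compare the shared sparse codes
# of P1 @ x1 and P2 @ x2 with respect to D1 and D2.
print(A.shape, D1.shape, P1.shape)
```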