617 results for Dictionaries.


Relevance:

10.00%

Publisher:

Abstract:

Most of the world’s languages lack electronic word form dictionaries. The linguists who gather such dictionaries could be helped with an efficient morphology workbench that adapts to different environments and uses. A widely usable workbench could be characterized, ideally, as generally applicable, extensible, and freely available (GEA). It seems that such a solution could be implemented in the framework of finite-state methods. The current work defines the GEA desiderata and starts a series of articles concerning these desiderata in finite-state morphology. Subsequent parts will review the state of the art and present an action plan toward creating a widely usable finite-state morphology workbench.
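
As a concrete illustration of the finite-state flavour of such a workbench, the following minimal Python sketch realizes English noun plurals with ordered rewrite rules of the kind that finite-state calculi compile into transducers. The rules and examples are illustrative assumptions only, not taken from the article.

# Toy illustration of finite-state morphology: ordered rewrite rules
# that realize English noun plurals. Real workbenches compile such
# rules from declarative grammars into transducers.

def pluralize(lemma: str) -> str:
    """Map an analysis like 'fox' + PL to its surface form."""
    sibilants = ("s", "x", "z", "ch", "sh")
    if lemma.endswith(sibilants):
        return lemma + "es"          # fox+PL -> foxes
    if lemma.endswith("y") and lemma[-2:-1] not in "aeiou":
        return lemma[:-1] + "ies"    # fly+PL -> flies
    return lemma + "s"               # cat+PL -> cats

for lemma in ["cat", "fox", "fly", "day"]:
    print(f"{lemma}+PL -> {pluralize(lemma)}")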

Relevance:

10.00%

Publisher:

Abstract:

The main objects of investigation were the syntactic functions of adjectives. The interest in these functions stems from the different modes of use in which an adjective can occur. An adjective can take three modes of use: attributive (e.g. a fast car), predicative (e.g. the car is fast), and adverbial (e.g. the car drives fast). Since an adjective cannot always take every function, some dictionaries (especially learner's dictionaries) provide information about such restrictions within the lexical entry. The research compared the lexical entries for adjectives in four selected monolingual German dictionaries. The syntactic data on adjectives were compared in order to work out the differences and common characteristics of the lexical entries concerning the different modes of use, and to analyse and assess them. The focus, however, was on the differences in the syntactic information. For these differences it had to be determined which entry is grammatically correct, or whether an entry is in fact wrong. This required an empirical analysis based on the question of how an adjective is actually used in context wherever the dictionaries disagree. The correctness and consistency of lexical entries in German dictionaries are very important for supporting learners of German and for ensuring the user-friendliness of dictionaries. The investigation showed that in almost half of the cases (over 40%) the syntactic information on an adjective differs between the dictionaries. These differences naturally make it very difficult for non-native speakers to determine the correct usage of an adjective. The main aim of the doctoral thesis was therefore to establish and demonstrate the correct syntactic usage of a selected set of adjectives.

Relevance:

10.00%

Publisher:

Abstract:

592 pages.

Relevance:

10.00%

Publisher:

Abstract:

Language Documentation and Description as Language Planning: Working with Three Signed Minority Languages

Sign languages are minority languages that typically have a low status in society. Language planning has traditionally been controlled from outside the sign-language community. Even though signed languages lack a written form, dictionaries have played an important role in language description and as tools in foreign language learning. The background to the present study on sign language documentation and description as language planning is empirical research in three dictionary projects: in Finland-Swedish Sign Language, Albanian Sign Language, and Kosovar Sign Language. The study consists of an introductory article and five detailed studies which address language planning from different perspectives. The theoretical basis of the study is sociocultural linguistics. The research methods used were participant observation, interviews, focus group discussions, and document analysis.

The primary research questions are the following: (1) What is the role of dictionary and lexicographic work in language planning, in research on an undocumented signed language, and in relation to the language community as such? (2) What factors pose particular challenges in the documentation of a sign language and should therefore be given special attention during lexicographic work? (3) Is a conventional dictionary a valid tool for describing an undocumented sign language?

The results indicate that lexicographic work has a central part to play in language documentation, both as part of basic research on undocumented sign languages and for status planning. Existing dictionary work has contributed new knowledge about the languages and the language communities. The lexicographic work adds to the linguistic advocacy work done by the community itself with the aim of vitalizing the language, empowering the community, receiving governmental recognition for the language, and improving the linguistic (human) rights of the language users. The history of signed languages as low-status languages has consequences for language planning and lexicography. One challenge that the study discusses is the relationship between the sign-language community and the hearing sign linguist. In order to make it possible for the community itself to take the lead in a language planning process, raising linguistic awareness within the community is crucial. The results give rise to the question of whether lexicographic work is of more importance for status planning than for corpus planning. A conventional dictionary as a tool for describing an undocumented sign language is criticised. The study discusses differences between signed and spoken/written languages that are challenging for lexicographic presentations. Alternative electronic lexicographic approaches, including both lexicon and grammar, are also discussed.

Keywords: sign language, Finland-Swedish Sign Language, Albanian Sign Language, Kosovar Sign Language, language documentation and description, language planning, lexicography

Relevance:

10.00%

Publisher:

Abstract:

This paper presents a preliminary analysis of Kannada WordNet and a set of relevant computational tools. Although the design has been inspired by the famous English WordNet and, to a certain extent, by the Hindi WordNet, the unique features of Kannada WordNet are graded antonymy and meronymy relationships, nominal as well as verbal compounding, complex verb constructions, and an efficient underlying database design (built to handle the storage and display of Kannada Unicode characters). Kannada WordNet will not only add to the sparse collection of machine-readable Kannada dictionaries but will also give new insights into the Kannada vocabulary. It provides a sufficient interface for applications such as Kannada machine translation, spell checking, and semantic analysis.
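
As a rough illustration of the relations the abstract highlights, here is a minimal Python sketch of synsets carrying graded antonymy and meronymy links. The schema and the English example entries are assumptions for illustration, not the actual Kannada WordNet database design.

# Illustrative synset structure with graded antonymy and meronymy.
# The entries are invented English stand-ins; Kannada entries would
# be ordinary Unicode strings in the same fields.
from dataclasses import dataclass, field

@dataclass
class Synset:
    lemma: str
    gloss: str
    antonyms: list = field(default_factory=list)   # (synset, grade)
    meronyms: list = field(default_factory=list)   # part-of links

hot = Synset("hot", "of high temperature")
warm = Synset("warm", "of moderately high temperature")
cold = Synset("cold", "of low temperature")

# Graded antonymy: 'cold' opposes 'hot' more strongly than 'warm' does.
hot.antonyms = [(cold, 1.0), (warm, 0.5)]

tree = Synset("tree", "a woody perennial plant")
bark = Synset("bark", "outer covering of a tree trunk")
tree.meronyms = [bark]              # bark is a part of tree

for ant, grade in hot.antonyms:
    print(f"hot --antonym({grade})--> {ant.lemma}")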

Relevance:

10.00%

Publisher:

Abstract:

Real-time object tracking is a critical task in many computer vision applications. Achieving rapid and robust tracking while handling changes in object pose and size, varying illumination, and partial occlusion is challenging given the limited computational resources. In this paper we propose a real-time object tracker in an l1-minimization framework that addresses these issues. In the proposed approach, dictionaries containing templates of overlapping object fragments are created. The candidate fragments are sparsely represented in the dictionary fragment space by solving an l1-regularized least-squares problem. The nonzero coefficients indicate the relative motion between the target and candidate fragments, along with a fidelity measure. The final object motion is obtained by fusing the reliable motion information. The dictionary is updated based on the object likelihood map. The proposed tracking algorithm is tested on various challenging videos and found to outperform an earlier approach.
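
The core coding step can be sketched with off-the-shelf tools: the snippet below sparsely represents a candidate fragment over a dictionary of template fragments by solving the l1-regularized least-squares problem with scikit-learn's Lasso. The data are random stand-ins, and fragment extraction, motion fusion, and the dictionary update are omitted.

# Sparse coding of a candidate fragment over template fragments:
#   min_x 0.5*||y - D x||^2 + alpha*||x||_1
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
d, n_templates = 256, 40            # vectorized fragment size, atoms
D = rng.standard_normal((d, n_templates))
D /= np.linalg.norm(D, axis=0)      # unit-norm template atoms

# Candidate fragment: a noisy mix of a few templates.
true_idx = [3, 17]
y = D[:, true_idx].sum(axis=1) + 0.01 * rng.standard_normal(d)

coder = Lasso(alpha=0.05, max_iter=5000)
coder.fit(D, y)
support = np.flatnonzero(np.abs(coder.coef_) > 1e-3)
print("active templates:", support)  # nonzero coefficients point to
                                     # the matching template fragments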

Relevance:

10.00%

Publisher:

Abstract:

Scatter/Gather systems are increasingly useful for browsing document corpora. The usability of present-day systems is restricted to monolingual corpora, and their methods for clustering and labeling do not easily extend to the multilingual setting, especially in the absence of dictionaries or machine translation. In this paper, we study the cluster labeling problem for multilingual corpora in the absence of machine translation, using comparable corpora instead. Using a variational approach, we show that multilingual topic models can effectively handle the cluster labeling problem, which in turn allows us to design a novel Scatter/Gather system, ShoBha. Experimental results on three datasets, namely the Canadian Hansards corpus, the overlapping English, Hindi and Bengali articles of Wikipedia, and a trilingual news corpus containing 41,000 articles, confirm the utility of the proposed system.
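
The paper's labeling relies on a multilingual topic model; as a deliberately simplified, monolingual stand-in, the sketch below clusters documents with k-means and labels each cluster with the top terms of its TF-IDF centroid. The toy corpus is invented, and this is not the variational method the abstract describes.

# Monolingual stand-in for cluster labeling in a Scatter/Gather UI.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
import numpy as np

docs = [
    "parliament passed the budget bill",
    "the senate debated the new tax law",
    "the team won the championship match",
    "the striker scored twice in the final",
]
vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(docs)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

terms = np.array(vec.get_feature_names_out())
for k, center in enumerate(km.cluster_centers_):
    top = terms[np.argsort(center)[::-1][:3]]   # highest-weight terms
    print(f"cluster {k} label:", ", ".join(top))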

Relevance:

10.00%

Publisher:

Abstract:

Tight fusion frames that form optimal packings in Grassmannian manifolds are of interest in signal processing and communication applications. In this paper, we study optimal packings and fusion frames having a specific structure for use in block-sparse recovery problems. The paper starts with a sufficient condition for a set of subspaces to be an optimal packing. Further, a method of using optimal Grassmannian frames to construct tight fusion frames that form optimal packings is given. Then, we derive a lower bound on the block coherence of dictionaries used in block-sparse recovery. From this result, we conclude that the Grassmannian fusion frames considered in this paper are optimal from the block coherence point of view.
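
For reference, block coherence can be computed directly. The sketch below follows the usual definition, mu_B = max_{i != j} sigma_max(D_i^T D_j) / d for blocks D_i of d unit-norm columns; that this matches the paper's exact normalization is an assumption.

# Block coherence of a dictionary with consecutive blocks of d columns.
import numpy as np

def block_coherence(D: np.ndarray, d: int) -> float:
    """D has unit-norm columns grouped into consecutive blocks of d."""
    n_blocks = D.shape[1] // d
    blocks = [D[:, i*d:(i+1)*d] for i in range(n_blocks)]
    mu = 0.0
    for i in range(n_blocks):
        for j in range(n_blocks):
            if i == j:
                continue
            s = np.linalg.norm(blocks[i].T @ blocks[j], 2)  # sigma_max
            mu = max(mu, s / d)
    return mu

rng = np.random.default_rng(1)
D = rng.standard_normal((16, 8))
D /= np.linalg.norm(D, axis=0)
print("block coherence:", block_coherence(D, d=2))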

Relevance:

10.00%

Publisher:

Abstract:

To perform super resolution (SR) of low-resolution images, state-of-the-art methods learn a pair of low-resolution and high-resolution dictionaries from multiple images. These trained dictionaries are used to replace patches in the low-resolution image with appropriately matching patches from the high-resolution dictionary. In this paper we propose using a single common image as the dictionary, in conjunction with approximate nearest-neighbour fields (ANNF), to perform SR. By using a common source image, we are able to bypass the learning phase and to reduce the dictionary from a collection of hundreds of images to a single image. By adapting recent developments in ANNF computation to suit super resolution, we are able to perform much faster and more accurate SR than existing techniques. To establish this claim, we compare the proposed algorithm against various state-of-the-art algorithms and show that we achieve better and faster reconstruction without any training.
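
The matching step can be sketched with an exact nearest-neighbour search standing in for the ANNF computation: each patch of the low-resolution input is matched to its closest patch in the single source image. The image arrays are random placeholders, and the reconstruction stage of the paper is omitted.

# Patch matching against a single source image used as the dictionary.
import numpy as np
from sklearn.feature_extraction.image import extract_patches_2d
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
source = rng.random((64, 64))       # the single dictionary image
query = rng.random((32, 32))        # input to be super-resolved

patch = (5, 5)
dict_patches = extract_patches_2d(source, patch)
query_patches = extract_patches_2d(query, patch)

nn = NearestNeighbors(n_neighbors=1)
nn.fit(dict_patches.reshape(len(dict_patches), -1))
dist, idx = nn.kneighbors(query_patches.reshape(len(query_patches), -1))
print("matched", len(idx), "patches; mean distance:", dist.mean())
# In the full method, matched high-resolution patches would then be
# pasted and blended to reconstruct the super-resolved image.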

Relevance:

10.00%

Publisher:

Abstract:

User authentication is essential for accessing computing resources, network resources, email accounts, online portals, etc. To authenticate a user, the system stores user credentials (user id and password pairs). Recovering user passwords from a system, and conversely protecting them against such attacks, has long been a problem of interest. In this work we show that passwords are still vulnerable to hash-chain-based and efficient dictionary attacks. Human-generated passwords follow identifiable patterns. We have analysed a sample of 19 million passwords of different lengths, available online, and studied the distribution of the symbols in the password strings. We show that the distribution of symbols in user passwords is affected by the native language of the user. From symbol distributions we can build smart and efficient dictionaries, which are smaller in size while covering a large share of plausible passwords from the key space. These smart dictionaries make dictionary-based attacks practical.
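
The symbol-distribution analysis can be sketched in a few lines: count character classes per string position over a password sample. The four sample passwords below are invented; the study analysed 19 million real ones.

# Position-wise character-class counts over a password sample.
from collections import Counter

passwords = ["password1", "Delhi@123", "qwerty", "admin2024"]

def char_class(c: str) -> str:
    if c.islower(): return "lower"
    if c.isupper(): return "upper"
    if c.isdigit(): return "digit"
    return "symbol"

by_position = {}
for pw in passwords:
    for i, c in enumerate(pw):
        by_position.setdefault(i, Counter())[char_class(c)] += 1

for i in sorted(by_position):
    print(i, dict(by_position[i]))
# Position-wise skews like these let an attacker order a dictionary
# so that high-probability candidates are tried first.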

Relevance:

10.00%

Publisher:

Abstract:

In big data image/video analytics, we encounter the problem of learning an over-complete dictionary for sparse representation from a large training dataset that cannot be processed at once because of storage and computational constraints. To tackle dictionary learning in such scenarios, we propose an algorithm that exploits the inherent clustered structure of the training data and makes use of a divide-and-conquer approach. The fundamental idea behind the algorithm is to partition the training dataset into smaller clusters and learn a local dictionary for each cluster. Subsequently, the local dictionaries are merged to form a global dictionary. Merging is done by solving another dictionary learning problem on the atoms of the locally trained dictionaries. This algorithm is referred to as the split-and-merge algorithm. We show that the proposed algorithm is efficient in its memory usage and computational complexity, and performs on par with the standard learning strategy, which operates on the entire data at once. As an application, we consider the problem of image denoising. We present a comparative analysis of our algorithm against standard learning techniques that use the entire database at once, in terms of training and denoising performance. We observe that the split-and-merge algorithm yields a remarkable reduction in training time without significantly affecting denoising performance.
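
A compact sketch of the split-and-merge idea, using scikit-learn's mini-batch dictionary learner as a stand-in for the paper's learner: partition the data, learn a local dictionary per cluster, then learn the global dictionary on the pooled local atoms. Dimensions and data are arbitrary placeholders.

# Split-and-merge dictionary learning, sketched with scikit-learn.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
X = rng.standard_normal((3000, 64))          # training patches (rows)

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

local_atoms = []
for k in range(3):                            # split: one dictionary per cluster
    dl = MiniBatchDictionaryLearning(n_components=32, random_state=0)
    dl.fit(X[labels == k])
    local_atoms.append(dl.components_)        # (32, 64) atom matrix

pooled = np.vstack(local_atoms)               # merge: relearn on the atoms
merger = MiniBatchDictionaryLearning(n_components=48, random_state=0)
merger.fit(pooled)
D_global = merger.components_
print("global dictionary shape:", D_global.shape)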

Relevance:

10.00%

Publisher:

Abstract:

Cross-domain and cross-modal matching has many applications in computer vision and pattern recognition; heterogeneous face recognition and cross-view action recognition are two examples. It is a very challenging task, since the data in the two domains can differ significantly. In this work, we propose a coupled dictionary and transformation learning approach that models the relationship between the data in both domains. The approach learns a pair of transformation matrices that map the data in the two domains such that, in the transformed space, they share common sparse representations with respect to their own dictionaries. The dictionaries for the two domains are learnt in a coupled manner, with an additional discriminative term to ensure improved recognition performance. The dictionaries and the transformation matrices are jointly updated in an iterative manner. The applicability of the proposed approach is illustrated by evaluating its performance on several challenging tasks: face recognition across pose, illumination and resolution; heterogeneous face recognition; and cross-view action recognition. Extensive experiments on five datasets, namely the CMU-PIE, Multi-PIE, ChokePoint, HFB and IXMAS datasets, and comparisons with several state-of-the-art approaches show the effectiveness of the proposed approach.
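
The alternating structure of the joint update can be sketched as follows, with ordinary least squares standing in for the paper's sparse coding and the discriminative term omitted. Dimensions and data are arbitrary stand-ins, so this shows only the shape of the iteration, not the actual algorithm.

# Simplified alternating updates: transforms P1, P2 map paired data
# X1, X2 so the transformed samples share one code matrix A over
# per-domain dictionaries D1, D2.
import numpy as np

rng = np.random.default_rng(0)
n, d1, d2, p, k = 200, 30, 40, 20, 15     # samples, dims, latent, atoms
X1, X2 = rng.standard_normal((d1, n)), rng.standard_normal((d2, n))
P1, P2 = rng.standard_normal((p, d1)), rng.standard_normal((p, d2))
D1, D2 = rng.standard_normal((p, k)), rng.standard_normal((p, k))
lam = 1e-2

def ridge_dict(Y, A):                      # D = Y A^T (A A^T + lam I)^-1
    return Y @ A.T @ np.linalg.inv(A @ A.T + lam * np.eye(A.shape[0]))

for _ in range(20):
    # Shared codes from both transformed domains (stacked least squares).
    D = np.vstack([D1, D2]); Y = np.vstack([P1 @ X1, P2 @ X2])
    A = np.linalg.lstsq(D, Y, rcond=None)[0]
    D1, D2 = ridge_dict(P1 @ X1, A), ridge_dict(P2 @ X2, A)
    # Transforms follow the same regularized least-squares form.
    P1 = D1 @ A @ X1.T @ np.linalg.inv(X1 @ X1.T + lam * np.eye(d1))
    P2 = D2 @ A @ X2.T @ np.linalg.inv(X2 @ X2.T + lam * np.eye(d2))

residual = np.vstack([P1 @ X1, P2 @ X2]) - np.vstack([D1, D2]) @ A
print("residual norm:", np.linalg.norm(residual))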


Relevance:

10.00%

Publisher:

Abstract:

Fingerprints are used for identification in forensics; fingerprint identification is classified into manual and automatic, and automatic fingerprint identification systems into latent and exemplar. A novel exemplar technique, Fingerprint Image Verification using Dictionary Learning (FIVDL), is proposed to improve performance on low-quality fingerprints, where the dictionary learning method reduces time complexity by using block processing instead of pixel processing. The dynamic range of an image is adjusted using the Successive Mean Quantization Transform (SMQT), and frequency-domain noise is reduced using spectral-frequency histogram equalization. An adaptive nonlinear dynamic-range adjustment technique is then used to determine the local spectral features of the corresponding fingerprint ridge frequency and orientation. The dictionary is constructed using the spatial fundamental frequency determined from these spectral features. The dictionaries help remove the spurious noise present in fingerprints and are further used to reconstruct the image for matching. The proposed FIVDL is verified on FVC database sets, and experimental results show an improvement over state-of-the-art techniques.
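
Block processing for ridge-frequency estimation can be sketched directly: split the image into blocks and read each block's dominant spatial frequency off its FFT magnitude. The image is a random placeholder, and the SMQT enhancement and dictionary construction of the paper are not reproduced.

# Per-block dominant spatial frequency via the FFT, illustrating
# block-versus-pixel processing.
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((128, 128))              # stand-in fingerprint image
B = 16                                    # block size

freqs = np.zeros((128 // B, 128 // B))
for i in range(0, 128, B):
    for j in range(0, 128, B):
        block = img[i:i+B, j:j+B]
        spec = np.abs(np.fft.fft2(block - block.mean()))
        spec[0, 0] = 0                    # ignore any residual DC
        fy, fx = np.unravel_index(spec.argmax(), spec.shape)
        # Radial frequency of the dominant component, in cycles/pixel.
        freqs[i // B, j // B] = np.hypot(min(fy, B - fy), min(fx, B - fx)) / B

print("per-block dominant frequencies:\n", freqs.round(2))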

Relevance:

10.00%

Publisher:

Abstract:

This paper describes the development of the 2003 CU-HTK large vocabulary speech recognition system for Conversational Telephone Speech (CTS). The system was designed based on a multi-pass, multi-branch structure where the output of all branches is combined using system combination. A number of advanced modelling techniques such as Speaker Adaptive Training, Heteroscedastic Linear Discriminant Analysis, Minimum Phone Error estimation and specially constructed Single Pronunciation dictionaries were employed. The effectiveness of each of these techniques and their potential contribution to the result of system combination was evaluated in the framework of a state-of-the-art LVCSR system with sophisticated adaptation. The final 2003 CU-HTK CTS system constructed from some of these models is described and its performance on the DARPA/NIST 2003 Rich Transcription (RT-03) evaluation test set is discussed.
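
System combination itself can be illustrated with a toy: word-level majority voting over already-aligned hypotheses from several branches. Real ROVER-style combination first aligns the hypotheses into a word transition network; that alignment is omitted here, so this is only a sketch of the voting idea, not the CU-HTK procedure.

# Majority vote over pre-aligned hypotheses from three branches.
from collections import Counter

hyps = [
    "the cat sat on the mat".split(),
    "the cat sat in the mat".split(),
    "a cat sat on the mat".split(),
]
combined = [Counter(words).most_common(1)[0][0] for words in zip(*hyps)]
print(" ".join(combined))   # -> "the cat sat on the mat"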