974 results for Onomasiological Dictionary
Abstract:
Graduate Program in Linguistic Studies - IBILCE
Abstract:
This paper applies the model of a bilingual onomasiological terminological dictionary proposed by Babini (2001b) to the development of an English-Portuguese and Portuguese-English electronic dictionary of the fundamental Artificial Neural Networks (ANN) terms. This subarea of Artificial Intelligence was chosen because of its use in several technological activities. An onomasiological dictionary is characterized by allowing lexical or terminological units to be retrieved from their semantic content. Our dictionary model allows two types of search: semasiological and onomasiological. The onomasiological search is made possible by a set of semes, or semantic traits, that make up the concept of each term in the dictionary.
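A minimal sketch of the two search directions this abstract describes, assuming a toy entry structure in which each term carries a set of semes; the terms, semes and definitions below are invented placeholders, not material from the actual ANN dictionary:

```python
# Toy illustration of semasiological vs. onomasiological lookup.
entries = {
    "perceptron": {
        "definition": "single-layer feedforward network used as a linear classifier",
        "semes": {"network", "single-layer", "supervised", "classifier"},
    },
    "backpropagation": {
        "definition": "gradient-based algorithm for training multilayer networks",
        "semes": {"algorithm", "training", "gradient", "supervised"},
    },
}

def semasiological_search(term):
    """From the term (signifier) to its semantic content."""
    return entries.get(term)

def onomasiological_search(query_semes):
    """From semantic traits (semes) to every term whose concept contains them."""
    return [t for t, e in entries.items() if query_semes <= e["semes"]]

print(onomasiological_search({"supervised", "training"}))   # ['backpropagation']
```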
Abstract:
Nowadays, the lexicographical market has seen an increase in the number of works produced. These works follow two forms of macro-structural organization: the semasiological and the onomasiological. This study considers the qualities and advantages of onomasiological dictionaries, and argues that semasiology and onomasiology have an intrinsic relationship that should be taken into account in the development of dictionaries. To demonstrate the benefits of combining onomasiological and semasiological search routes in the same work, we describe how those routes were adopted in the Onomasiological Dictionary of Chromatic Phrases from Fauna and Flora.
Abstract:
This study applies theories of cognitive linguistics to the compilation of English learners’ dictionaries. Specifically, it employs the concepts of basic level categories and image schemas, two basic cognitive experiences, to examine the ‘definition proper’ of English dictionaries for foreign learners. In the study, the definition proper refers to the constituent part of a reference work that provides an explanation of the meanings of a word, phrase or term. This explanation mainly involves the defining vocabulary, sense division and arrangement, and the means of defining (i.e. paraphrase, true definition, functional definition and pictorial illustration). The aim of the study is to suggest ways of aligning the consultation and learning of definitions with dictionary users’ cognitive experiences. For this purpose, the definition proper of the fourth edition of the Longman Dictionary of Contemporary English (LDOCE4) was analysed from the perspective of basic cognitive experiences. The study found that, in general, the lexicographic practices of LDOCE4 are consistent with theories of cognitive linguistics, but that there are shortcomings resulting from a disregard of basic cognitive experiences.
Abstract:
Recent advances in computer vision and machine learning suggest that a wide range of problems can be addressed more appropriately by considering non-Euclidean geometry. In this paper we explore sparse dictionary learning over the space of linear subspaces, which form Riemannian structures known as Grassmann manifolds. To this end, we propose to embed Grassmann manifolds into the space of symmetric matrices by an isometric mapping, which enables us to devise a closed-form solution for updating a Grassmann dictionary, atom by atom. Furthermore, to handle non-linearity in data, we propose a kernelised version of the dictionary learning algorithm. Experiments on several classification tasks (face recognition, action recognition, dynamic texture classification) show that the proposed approach achieves considerable improvements in discrimination accuracy, in comparison to state-of-the-art methods such as the kernelised Affine Hull Method and graph-embedding Grassmann discriminant analysis.
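As a rough illustration of the embedding the abstract relies on (not the authors' implementation): a p-dimensional subspace span(Y), with Y having orthonormal columns, can be mapped to the symmetric matrix YY^T, after which distances between subspaces reduce to ordinary Frobenius distances and Euclidean tools such as closed-form atom updates become available.

```python
# Sketch of the projection embedding of a Grassmann manifold into the
# space of symmetric matrices; all sizes below are arbitrary examples.
import numpy as np

def subspace(X, p):
    """Orthonormal basis of the p-dimensional dominant subspace of data X."""
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    return U[:, :p]

def embed(Y):
    """Isometric embedding of span(Y) into the space of symmetric matrices."""
    return Y @ Y.T

def chordal_distance(Y1, Y2):
    """Projection (chordal) distance between two subspaces."""
    return np.linalg.norm(embed(Y1) - embed(Y2), "fro") / np.sqrt(2)

rng = np.random.default_rng(0)
Y1 = subspace(rng.standard_normal((20, 8)), 3)
Y2 = subspace(rng.standard_normal((20, 8)), 3)
print(chordal_distance(Y1, Y2))
```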
Abstract:
We report on a plan to establish a `Dictionary of LHC Signatures', an initiative started at the WHEPP-X workshop in Chennai in January 2008. The study aims to develop a strategy for distinguishing three classes of dark-matter-motivated scenarios, namely R-parity-conserving supersymmetry, little Higgs models with T-parity conservation, and universal extra dimensions with KK-parity, for generic cases of their realization in a wide range of the model space. Discriminating signatures are tabulated and will require further detailed analysis.
Abstract:
This paper presents an effective feature representation method in the context of activity recognition. Efficient and effective feature representation plays a crucial role not only in activity recognition but also in a wide range of applications such as motion analysis, tracking and 3D scene understanding. In the context of activity recognition, local features are increasingly popular for representing videos because of their simplicity and efficiency. While they achieve state-of-the-art performance with low computational requirements, their performance is still limited for real-world applications due to a lack of contextual information and models not being tailored to specific activities. We propose a new activity representation framework to address the shortcomings of the popular but simple bag-of-words approach. In our framework, a multiple-instance SVM (mi-SVM) is first used to identify positive features for each action category, and the k-means algorithm is used to generate a codebook. Locality-constrained linear coding is then used to encode the features against the generated codebook, followed by spatio-temporal pyramid pooling to convey the spatio-temporal statistics. Finally, an SVM is used to classify the videos. Experiments carried out on two popular datasets of varying complexity demonstrate significant performance improvements over the baseline bag-of-features method.
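A simplified sketch of the codebook-encode-classify pipeline, assuming scikit-learn is available; the mi-SVM feature selection, locality-constrained linear coding and spatio-temporal pyramid pooling of the paper are replaced here by hard assignment and global pooling, and the descriptors are synthetic placeholders.

```python
# Cluster local features into a codebook, encode each video against the
# codebook as a normalised histogram, and train an SVM on the histograms.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

def build_codebook(all_features, k=64):
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit(all_features)

def encode(video_features, codebook):
    """Normalised histogram of codeword assignments for one video."""
    words = codebook.predict(video_features)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

rng = np.random.default_rng(0)
videos = [rng.standard_normal((200, 32)) for _ in range(40)]  # local descriptors per video
labels = np.arange(40) % 2                                    # two dummy action classes

codebook = build_codebook(np.vstack(videos))
X = np.array([encode(v, codebook) for v in videos])
clf = LinearSVC(C=1.0).fit(X, labels)
print(clf.score(X, labels))
```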
Abstract:
To perform super resolution of low-resolution images, state-of-the-art methods are based on learning a pair of low-resolution and high-resolution dictionaries from multiple images. These trained dictionaries are used to replace patches in the low-resolution image with appropriate matching patches from the high-resolution dictionary. In this paper we propose using a single common image as the dictionary, in conjunction with approximate nearest neighbour fields (ANNF), to perform super resolution (SR). By using a common source image, we are able to bypass the learning phase and to reduce the dictionary from a collection of hundreds of images to a single image. By adapting recent developments in ANNF computation to suit super resolution, we are able to perform SR much faster and more accurately than existing techniques. To establish this claim, we compare the proposed algorithm against various state-of-the-art algorithms and show that we achieve better and faster reconstruction without any training.
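A toy sketch of the single-image-dictionary idea, with brute-force patch matching standing in for the ANNF step; the function names, patch size and downsampling scheme are illustrative assumptions, not the authors' method.

```python
# The reference image is downsampled to form low-res/high-res patch pairs,
# each input patch is matched to its nearest low-res patch, and the matching
# high-res patch is pasted into the output. Image sizes are assumed to be
# divisible by the scale factor.
import numpy as np

def patches(img, size, step):
    out, pos = [], []
    for i in range(0, img.shape[0] - size + 1, step):
        for j in range(0, img.shape[1] - size + 1, step):
            out.append(img[i:i + size, j:j + size].ravel())
            pos.append((i, j))
    return np.array(out), pos

def super_resolve(lowres, reference, scale=2, size=4):
    ref_low = reference[::scale, ::scale]              # crude downsampled dictionary
    dict_low, dict_pos = patches(ref_low, size, 1)
    out = np.zeros((lowres.shape[0] * scale, lowres.shape[1] * scale))
    for i in range(0, lowres.shape[0] - size + 1, size):
        for j in range(0, lowres.shape[1] - size + 1, size):
            q = lowres[i:i + size, j:j + size].ravel()
            k = int(np.argmin(((dict_low - q) ** 2).sum(axis=1)))   # nearest low-res patch
            ri, rj = dict_pos[k]
            out[i * scale:(i + size) * scale, j * scale:(j + size) * scale] = \
                reference[ri * scale:(ri + size) * scale, rj * scale:(rj + size) * scale]
    return out

rng = np.random.default_rng(0)
print(super_resolve(rng.random((32, 32)), rng.random((64, 64))).shape)   # (64, 64)
```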
Abstract:
User authentication is essential for accessing computing resources, network resources, email accounts, online portals, and so on. To authenticate a user, the system stores user credentials (user-id and password pairs). Recovering user passwords from a system, and conversely protecting them against such attacks, has long been a problem of interest. In this work we show that passwords are still vulnerable to hash-chain-based and efficient dictionary attacks. Human-generated passwords follow identifiable patterns. We have analysed a sample of 19 million passwords of different lengths, available online, and studied the distribution of the symbols in the password strings. We show that the distribution of symbols in user passwords is affected by the native language of the user. From the symbol distributions we can build smart and efficient dictionaries, which are smaller in size yet cover a large fraction of plausible passwords in the key space. These smart dictionaries make dictionary-based attacks practical.
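A small sketch of the symbol-distribution analysis described above, assuming a plain list of leaked passwords; the sample list is a placeholder, not the 19-million-password corpus analysed in the paper.

```python
# Count how often each symbol appears at each position of the passwords,
# then use the per-position distributions to rank candidate guesses.
from collections import Counter

passwords = ["password1", "qwerty", "senha123", "iloveyou"]   # placeholder sample

def position_distributions(pwds, max_len=16):
    dists = [Counter() for _ in range(max_len)]
    for p in pwds:
        for i, ch in enumerate(p[:max_len]):
            dists[i][ch] += 1
    return dists

dists = position_distributions(passwords)
# Most likely symbol per position: the backbone of a "smart" dictionary that
# enumerates high-probability strings before falling back to exhaustive search.
print([d.most_common(1)[0][0] for d in dists if d])
```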
Abstract:
In big data image/video analytics, we encounter the problem of learning an over-complete dictionary for sparse representation from a large training dataset that cannot be processed at once because of storage and computational constraints. To tackle dictionary learning in such scenarios, we propose an algorithm that exploits the inherent clustered structure of the training data and makes use of a divide-and-conquer approach. The fundamental idea behind the algorithm is to partition the training dataset into smaller clusters and learn a local dictionary for each cluster. Subsequently, the local dictionaries are merged to form a global dictionary. Merging is done by solving another dictionary learning problem on the atoms of the locally trained dictionaries. This algorithm is referred to as the split-and-merge algorithm. We show that the proposed algorithm is efficient in its usage of memory and computation, and performs on par with the standard learning strategy, which operates on the entire dataset at once. As an application, we consider the problem of image denoising. We present a comparative analysis of our algorithm against standard learning techniques that use the entire database at a time, in terms of training and denoising performance. We observe that the split-and-merge algorithm yields a remarkable reduction in training time without significantly affecting denoising performance.
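A minimal sketch of the split-and-merge idea using scikit-learn's generic dictionary learner as a stand-in for the authors' method; the data, cluster count and atom counts are illustrative assumptions.

```python
# Split: cluster the training set and learn a small dictionary per cluster.
# Merge: learn the global dictionary from the pooled local atoms.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
X = rng.standard_normal((1500, 64))                    # e.g. vectorised 8x8 patches

clusters = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)
local_atoms = []
for c in range(5):
    dl = DictionaryLearning(n_components=32, max_iter=20, random_state=0)
    dl.fit(X[clusters == c])
    local_atoms.append(dl.components_)

# Solve another dictionary learning problem on the pooled local atoms
# to obtain a single over-complete global dictionary.
atoms = np.vstack(local_atoms)                         # 5 x 32 = 160 candidate atoms
global_dict = DictionaryLearning(n_components=96, max_iter=20, random_state=0).fit(atoms)
print(global_dict.components_.shape)                   # (96, 64)
```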