978 results for Bilingual Dictionary
Abstract:
The impact of Greek-Egyptian bilingualism on language use and linguistic competence is the key issue in this dissertation. The language use in a corpus of 148 Greek notarial contracts is analyzed on phonological, morphological and syntactic levels. The texts were written by bilingual notaries (agoranomoi) in Upper Egypt in the later Hellenistic period. They present, for the most part, very good administrative Greek. On the other hand, their language contains variation and idiosyncrasies that were earlier condemned as ungrammatical and bad Greek, and were not subjected to closer analysis. In order to reach plausible explanations for those phenomena, thorough research into the sociohistorical and linguistic context was needed before the linguistic analysis. The general linguistic landscape, the population pattern and the status and frequency of Greek literacy in Ptolemaic Egypt in general, and in Upper Egypt in particular, are presented. Through a detailed examination of the notaries themselves (their names, families and handwriting), it became evident that there were one to three persons at the notarial office writing under the signature of one notary. Often the documents under one notary's name were written in the same hand. We get, therefore, exceptionally close to studying idiolects in written material from antiquity. The qualitative linguistic analysis revealed that the notaries made relatively few orthographic mistakes reflecting the ongoing phonological changes, and they mastered the morphological forms. The problems arose at the syntactic level, for example, with the pattern of agreement between noun groups or between a noun and its modifiers. The significant structural differences between Greek and Egyptian may lie behind the innovative strategies used by some of the notaries. Moreover, certain syntactic structures were clearly transferred from the notaries' first language, Egyptian. This is obvious in the relative clause structure. Transfer can be found in other structures as well, although we must not forget the influence of parallel Greek structures. Sometimes these can act simultaneously. The interesting linguistic strategies and transfer features come mostly from the hand of one notary, Hermias. Some other notaries show similar patterns, for example, Hermias' cousin, Ammonios. Hermias' texts reveal that he probably spoke Greek more than his predecessors. It is possible to conclude, then, that the notaries of the later generations were more fluently bilingual; their two languages were partly integrated in their minds as an interlanguage combining elements from both languages. The earlier notaries had the two languages functionally separated and followed the standardized contract formulae more rigidly.
Abstract:
FinnWordNet is a wordnet for Finnish that complies with the format of the Princeton WordNet (PWN) (Fellbaum, 1998). It was built by translating the Princeton WordNet 3.0 synsets into Finnish by human translators. It is open source and contains 117,000 synsets. The Finnish translations were inserted into the PWN structure, resulting in a bilingual lexical database. In natural language processing (NLP), wordnets have been used for infusing computers with semantic knowledge, assuming that humans already have a sufficient amount of this knowledge. In this paper we present a case study of using wordnets as an electronic dictionary. We tested whether native Finnish speakers benefit from using a wordnet while completing English sentence completion tasks. We found that using either an English wordnet or a bilingual English-Finnish wordnet significantly improves performance in the task. This should be taken into account when setting standards and comparing human and computer performance on these tasks.
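A minimal sketch of the kind of bilingual wordnet lookup examined in the study, assuming NLTK's WordNet together with the Open Multilingual Wordnet data (which exposes FinnWordNet under the language code 'fin'); the query word is only an illustration and is not taken from the paper.

```python
# Sketch: using WordNet as a bilingual electronic dictionary with NLTK.
# Assumes the 'wordnet' and 'omw-1.4' corpora are available; the Open
# Multilingual Wordnet exposes FinnWordNet under lang='fin'.
import nltk
from nltk.corpus import wordnet as wn

nltk.download("wordnet", quiet=True)
nltk.download("omw-1.4", quiet=True)

def english_candidates(finnish_word):
    """Return English lemmas and glosses for every synset matching a Finnish word."""
    candidates = {}
    for synset in wn.synsets(finnish_word, lang="fin"):
        candidates[synset.name()] = {
            "gloss": synset.definition(),
            "english": synset.lemma_names("eng"),
        }
    return candidates

# Hypothetical query word: 'koira' (~ 'dog').
for name, info in english_candidates("koira").items():
    print(name, info["english"], "-", info["gloss"])
```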
Abstract:
Feature extraction in bilingual OCR is handicapped by the increase in the number of classes or characters to be handled. This is evident in the case of Indian languages, whose alphabet set is large. The complexity of the feature extraction process is expected to increase with the number of classes. Though the best set of features cannot be determined through quantitative measures alone, the characteristics of the scripts can help decide on the feature extraction procedure. This paper describes a hierarchical feature extraction scheme for recognition of printed bilingual (Tamil and Roman) text. The scheme divides the combined alphabet set of both scripts into subsets by the extraction of certain spatial and structural features. Three features, viz. geometric moments, DCT-based features and wavelet-transform-based features, are extracted from the grouped symbols, and a linear transformation is performed on them for efficient representation in the feature space. The transformation is obtained by maximizing certain criterion functions. Three techniques, namely principal component analysis, maximization of Fisher's ratio and maximization of a divergence measure, have been employed to estimate the transformation matrix. It has been observed that the proposed hierarchical scheme allows for easier handling of the alphabets, and there is an appreciable rise in recognition accuracy as a result of the transformations.
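A minimal sketch of one feature-and-projection combination named above (DCT-based features followed by a PCA-estimated transformation), assuming glyphs pre-segmented and normalized to 32x32; the image size, number of retained coefficients and component count are illustrative assumptions, not the authors' exact pipeline.

```python
# Sketch: DCT features + PCA projection for one group of bilingual glyphs.
import numpy as np
from scipy.fft import dctn
from sklearn.decomposition import PCA

def dct_features(glyph, keep=8):
    """2-D DCT of a 32x32 glyph; keep the low-frequency keep x keep block."""
    coeffs = dctn(glyph, norm="ortho")
    return coeffs[:keep, :keep].ravel()

def project_group(glyphs, n_components=20):
    """Extract DCT features for one symbol group and learn a PCA projection."""
    feats = np.stack([dct_features(g) for g in glyphs])
    pca = PCA(n_components=n_components)
    reduced = pca.fit_transform(feats)   # linear transformation of the features
    return pca, reduced

# Usage with random stand-in data for one grouped subset of symbols:
group = [np.random.rand(32, 32) for _ in range(100)]
pca, reduced = project_group(group)
print(reduced.shape)   # (100, 20)
```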
Abstract:
To perform super resolution of low-resolution images, state-of-the-art methods are based on learning a pair of low-resolution and high-resolution dictionaries from multiple images. These trained dictionaries are used to replace patches in the low-resolution image with appropriate matching patches from the high-resolution dictionary. In this paper we propose using a single common image as the dictionary, in conjunction with approximate nearest neighbour fields (ANNF), to perform super resolution (SR). By using a common source image, we are able to bypass the learning phase and to reduce the dictionary from a collection of hundreds of images to a single image. By adapting recent developments in ANNF computation to suit super-resolution, we are able to perform much faster and more accurate SR than existing techniques. To establish this claim, we compare the proposed algorithm against various state-of-the-art algorithms, and show that we are able to achieve better and faster reconstruction without any training.
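A minimal sketch of the single-image-dictionary idea, with sklearn's NearestNeighbors standing in for the ANNF computation; the patch size, scale factor and patch placement are illustrative assumptions, and overlapping output patches are simply overwritten rather than blended.

```python
# Sketch: super-resolution by patch matching against one dictionary image.
# Exact NearestNeighbors stands in for the approximate nearest-neighbour field.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from skimage.util import view_as_windows
from skimage.transform import resize

def super_resolve(low, dictionary_hr, scale=2, patch=5):
    # Build low-res / high-res patch pairs from the single dictionary image.
    dictionary_lr = resize(dictionary_hr,
                           (dictionary_hr.shape[0] // scale,
                            dictionary_hr.shape[1] // scale),
                           anti_aliasing=True)
    lr_patches = view_as_windows(dictionary_lr, (patch, patch))
    h, w = lr_patches.shape[:2]
    nn = NearestNeighbors(n_neighbors=1).fit(lr_patches.reshape(h * w, -1))

    # Upscale the input, then copy in the matching high-res patches.
    out = resize(low, (low.shape[0] * scale, low.shape[1] * scale))
    in_patches = view_as_windows(low, (patch, patch))
    for i in range(in_patches.shape[0]):
        for j in range(in_patches.shape[1]):
            idx = nn.kneighbors(in_patches[i, j].reshape(1, -1),
                                return_distance=False)[0, 0]
            y, x = divmod(idx, w)
            hr_patch = dictionary_hr[y * scale:(y + patch) * scale,
                                     x * scale:(x + patch) * scale]
            out[i * scale:(i + patch) * scale,
                j * scale:(j + patch) * scale] = hr_patch
    return out
```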
Abstract:
User authentication is essential for accessing computing resources, network resources, email accounts, online portals, etc. To authenticate a user, the system stores user credentials (user ID and password pairs). Recovering user passwords from a system, and conversely protecting them against such attacks, has long been a problem of interest in the field. In this work we show that passwords are still vulnerable to hash-chain-based and efficient dictionary attacks. Human-generated passwords follow identifiable patterns. We have analysed a sample of 19 million passwords of different lengths, available online, and studied the distribution of the symbols in the password strings. We show that the distribution of symbols in user passwords is affected by the native language of the user. From symbol distributions we can build smart and efficient dictionaries, which are smaller in size yet cover a large portion of the plausible passwords in the key-space. These smart dictionaries make dictionary-based attacks practical.
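A minimal sketch of the symbol-distribution analysis that such smart dictionaries build on: count symbol frequencies overall and per string position from a password sample. The file name and one-password-per-line format are assumptions for illustration.

```python
# Sketch: estimating symbol distributions from a leaked-password sample
# (one password per line in a plain-text file; the path is hypothetical).
from collections import Counter

def symbol_distributions(path, max_len=16):
    """Count symbol frequencies overall and per string position."""
    overall = Counter()
    per_position = [Counter() for _ in range(max_len)]
    with open(path, encoding="utf-8", errors="ignore") as f:
        for line in f:
            pw = line.rstrip("\n")
            overall.update(pw)
            for i, ch in enumerate(pw[:max_len]):
                per_position[i][ch] += 1
    return overall, per_position

overall, per_position = symbol_distributions("passwords_sample.txt")
# The most frequent symbols overall and at each position give an ordering
# for candidate guesses in a compact, high-coverage dictionary.
print(overall.most_common(10))
print(per_position[0].most_common(10))
```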
Abstract:
In big data image/video analytics, we encounter the problem of learning an over-complete dictionary for sparse representation from a large training dataset, which cannot be processed at once because of storage and computational constraints. To tackle the problem of dictionary learning in such scenarios, we propose an algorithm that exploits the inherent clustered structure of the training data and makes use of a divide-and-conquer approach. The fundamental idea behind the algorithm is to partition the training dataset into smaller clusters and learn local dictionaries for each cluster. Subsequently, the local dictionaries are merged to form a global dictionary. Merging is done by solving another dictionary learning problem on the atoms of the locally trained dictionaries. This algorithm is referred to as the split-and-merge algorithm. We show that the proposed algorithm is efficient in its usage of memory and computational complexity, and performs on par with the standard learning strategy, which operates on the entire dataset at once. As an application, we consider the problem of image denoising. We present a comparative analysis of our algorithm with the standard learning techniques that use the entire database at once, in terms of training and denoising performance. We observe that the split-and-merge algorithm results in a remarkable reduction of training time, without significantly affecting the denoising performance.
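A minimal sketch of the split-and-merge idea, using k-means for the split and sklearn's MiniBatchDictionaryLearning in place of the paper's specific dictionary learning routine; the cluster count and dictionary sizes are illustrative assumptions.

```python
# Sketch: split-and-merge dictionary learning.
# K-means splits the training set; local dictionaries are learnt per cluster
# and merged by running dictionary learning again on the pooled atoms.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import MiniBatchDictionaryLearning

def split_and_merge(X, n_clusters=4, local_atoms=64, global_atoms=128):
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X)

    # Learn a small dictionary on each cluster separately.
    local_dicts = []
    for k in range(n_clusters):
        dl = MiniBatchDictionaryLearning(n_components=local_atoms,
                                         transform_algorithm="omp")
        dl.fit(X[labels == k])
        local_dicts.append(dl.components_)

    # Merge: treat the pooled local atoms as training data for a second
    # dictionary learning problem, yielding the global dictionary.
    pooled_atoms = np.vstack(local_dicts)
    merger = MiniBatchDictionaryLearning(n_components=global_atoms,
                                         transform_algorithm="omp")
    merger.fit(pooled_atoms)
    return merger.components_

# Usage with stand-in data: 10000 signals of dimension 64.
X = np.random.randn(10000, 64)
D_global = split_and_merge(X)
print(D_global.shape)   # (128, 64)
```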
Abstract:
Oversmoothing of speech parameter trajectories is one of the causes of quality degradation in HMM-based speech synthesis. Various methods have been proposed to overcome this effect, the most recent ones being global variance (GV) and the modulation-spectrum-based post-filter (MSPF). However, there is still a significant quality gap between natural and synthesized speech. In this paper, we propose a two-fold post-filtering technique to alleviate, to a certain extent, the oversmoothing of spectral and excitation parameter trajectories in HMM-based speech synthesis. For the spectral parameters, we propose a sparse-coding-based post-filter to match the trajectories of synthetic speech to those of natural speech, and for the excitation trajectory, we introduce a perceptually motivated post-filter. Experimental evaluations show quality improvement compared with existing methods.
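A hedged sketch of the sparse-coding idea for the spectral post-filter: learn a dictionary from short segments of natural-speech spectral trajectories, sparse-code the synthetic segments over it, and reconstruct. The segment length, feature layout and sparsity level are assumptions for illustration, not the paper's configuration, and the excitation post-filter is not shown.

```python
# Sketch: sparse-coding post-filter for synthetic spectral trajectories.
import numpy as np
from sklearn.decomposition import DictionaryLearning, sparse_encode

SEG = 10  # frames per trajectory segment (an illustrative choice)

def segments(trajectory, seg=SEG):
    """Slice a (frames, dims) trajectory into flattened fixed-length segments."""
    n = trajectory.shape[0] // seg
    return trajectory[:n * seg].reshape(n, -1)

def train_postfilter(natural_traj, n_atoms=128):
    """Learn a dictionary of natural trajectory segments."""
    dl = DictionaryLearning(n_components=n_atoms, transform_algorithm="omp",
                            transform_n_nonzero_coefs=5)
    dl.fit(segments(natural_traj))
    return dl.components_

def apply_postfilter(synthetic_traj, dictionary, n_nonzero=5):
    """Sparse-code synthetic segments over the dictionary and reconstruct."""
    segs = segments(synthetic_traj)
    codes = sparse_encode(segs, dictionary, algorithm="omp",
                          n_nonzero_coefs=n_nonzero)
    filtered = codes @ dictionary
    return filtered.reshape(-1, synthetic_traj.shape[1])
```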
Abstract:
Cross-domain and cross-modal matching has many applications in the field of computer vision and pattern recognition. A few examples are heterogeneous face recognition, cross-view action recognition, etc. This is a very challenging task since the data in the two domains can differ significantly. In this work, we propose a coupled dictionary and transformation learning approach that models the relationship between the data in both domains. The approach learns a pair of transformation matrices that map the data in the two domains in such a manner that they share common sparse representations with respect to their own dictionaries in the transformed space. The dictionaries for the two domains are learnt in a coupled manner with an additional discriminative term to ensure improved recognition performance. The dictionaries and the transformation matrices are jointly updated in an iterative manner. The applicability of the proposed approach is illustrated by evaluating its performance on different challenging tasks: face recognition across pose, illumination and resolution, heterogeneous face recognition and cross-view action recognition. Extensive experiments on five datasets, namely CMU-PIE, Multi-PIE, ChokePoint, HFB and IXMAS, and comparisons with several state-of-the-art approaches show the effectiveness of the proposed approach. (C) 2015 Elsevier B.V. All rights reserved.
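A rough sketch of one alternating iteration of the coupled idea: sparse codes are shared across the two transformed domains, and the dictionaries and transformation matrices are then updated by least squares. The paper's discriminative term and normalization constraints are omitted, and all dimensions and update rules are illustrative assumptions rather than the authors' exact objective.

```python
# Sketch: alternating updates for coupled dictionary + transformation learning.
# Samples are rows: X1 is (n, d1), X2 is (n, d2). P1, P2 map each domain into
# a common m-dim space; D1, D2 are per-domain dictionaries; codes A are shared.
import numpy as np
from sklearn.decomposition import sparse_encode

def coupled_learning(X1, X2, m=40, n_atoms=60, sparsity=5, n_iter=10, seed=0):
    rng = np.random.default_rng(seed)
    n, d1 = X1.shape
    _, d2 = X2.shape
    P1 = rng.standard_normal((d1, m))
    P2 = rng.standard_normal((d2, m))
    D1 = rng.standard_normal((n_atoms, m))
    D2 = rng.standard_normal((n_atoms, m))

    for _ in range(n_iter):
        # 1) Shared sparse codes over the stacked, transformed domains.
        Y = np.hstack([X1 @ P1, X2 @ P2])               # (n, 2m)
        Dstack = np.hstack([D1, D2])                     # (n_atoms, 2m)
        Dstack /= np.linalg.norm(Dstack, axis=1, keepdims=True) + 1e-12
        A = sparse_encode(Y, Dstack, algorithm="omp",
                          n_nonzero_coefs=sparsity)      # (n, n_atoms)

        # 2) Dictionary updates by least squares (X @ P ~ A @ D).
        D1 = np.linalg.pinv(A) @ (X1 @ P1)
        D2 = np.linalg.pinv(A) @ (X2 @ P2)

        # 3) Transformation updates by least squares.
        P1 = np.linalg.pinv(X1) @ (A @ D1)
        P2 = np.linalg.pinv(X2) @ (A @ D2)

    return P1, P2, D1, D2
```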
Abstract:
Human detection is a complex problem owing to the variable poses that humans can adopt. Here, we address this problem in a sparse representation framework with an overcomplete scale-embedded dictionary. Histogram of oriented gradients (HOG) features extracted from the candidate image patches are sparsely represented by the dictionary, which contains positive bases along with negative and trivial bases. The object is detected based on the proposed likelihood measure obtained from the distribution of these sparse coefficients. The likelihood is obtained as the ratio of the contribution of the positive bases to that of the negative and trivial bases. The positive bases of the dictionary represent the object (human) at various scales. This enables us to detect the object at any scale in one shot and avoids scanning at multiple scales, which significantly reduces the computational complexity of the detection task. In addition to detecting the human, the method also finds the scale at which the human is detected, owing to the scale-embedded structure of the dictionary.
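A minimal sketch of the likelihood computation, with scikit-image's HOG extractor and OMP sparse coding standing in; the dictionary layout (positive bases stored first) and the HOG parameters are assumptions, and the dictionary's feature dimension must match the HOG descriptor length.

```python
# Sketch: sparse-representation likelihood for human detection.
# Dictionary rows are grouped into positive (human, scale-embedded),
# negative (background) and trivial bases; the likelihood is the ratio of
# positive coefficient energy to the rest.
import numpy as np
from skimage.feature import hog
from sklearn.decomposition import sparse_encode

def detection_likelihood(patch, dictionary, n_positive, sparsity=10):
    """patch: grayscale image window; dictionary: (n_atoms, feat_dim) rows,
    with the first n_positive rows assumed to be the positive (human) bases."""
    feat = hog(patch, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    code = sparse_encode(feat.reshape(1, -1), dictionary, algorithm="omp",
                         n_nonzero_coefs=sparsity)[0]
    pos = np.sum(np.abs(code[:n_positive]))
    rest = np.sum(np.abs(code[n_positive:])) + 1e-12
    return pos / rest

# The index of the strongest positive coefficient indicates which
# scale-embedded positive basis responded, i.e. the detected scale.
```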
Abstract:
We develop a new dictionary learning algorithm called ℓ1-K-SVD, by minimizing the ℓ1 distortion on the data term. The proposed formulation corresponds to maximum a posteriori estimation assuming a Laplacian prior on the coefficient matrix and additive noise, and is, in general, robust to non-Gaussian noise. The ℓ1 distortion is minimized by employing the iteratively reweighted least-squares (IRLS) algorithm. The dictionary atoms and the corresponding sparse coefficients are simultaneously estimated in the dictionary update step. Experimental results show that ℓ1-K-SVD results in noise robustness, faster convergence, and a higher atom recovery rate than the method of optimal directions, K-SVD, and the robust dictionary learning algorithm (RDL), in Gaussian as well as non-Gaussian noise. For a fixed value of sparsity, number of dictionary atoms, and data dimension, ℓ1-K-SVD outperforms K-SVD and RDL on small training sets. We also consider the generalized ℓp, 0 < p < 1, data metric to tackle heavy-tailed/impulsive noise. In an image denoising application, ℓ1-K-SVD was found to yield a higher peak signal-to-noise ratio (PSNR) than K-SVD for Laplacian noise. The structural similarity index increases by 0.1 for low input PSNR, which is significant and demonstrates the efficacy of the proposed method. (C) 2015 Elsevier B.V. All rights reserved.
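A hedged sketch of the IRLS idea for an ℓ1 data-fidelity term, shown only for the dictionary update with the sparse codes held fixed; the actual ℓ1-K-SVD updates atoms and coefficients simultaneously, so this illustrates just the reweighting mechanism.

```python
# Sketch: IRLS update of the dictionary for an l1 data term,
# i.e. min_D || X - D A ||_1 with the sparse codes A held fixed.
# X: (dim, n_samples), D: (dim, n_atoms), A: (n_atoms, n_samples).
import numpy as np

def irls_dictionary_update(X, D, A, n_irls=10, eps=1e-6):
    D = D.copy()
    for _ in range(n_irls):
        R = X - D @ A
        W = 1.0 / (np.abs(R) + eps)          # IRLS weights, one per entry
        # Each row of D solves an independent weighted least-squares problem.
        for i in range(X.shape[0]):
            Aw = A * W[i]                    # scale columns of A by the weights
            G = Aw @ A.T                     # (n_atoms, n_atoms) normal matrix
            b = Aw @ X[i]                    # (n_atoms,) right-hand side
            D[i] = np.linalg.solve(G + eps * np.eye(G.shape[0]), b)
    # Normalize atoms to unit l2 norm, as is customary in dictionary learning.
    D /= np.linalg.norm(D, axis=0, keepdims=True) + 1e-12
    return D
```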
Abstract:
Fingerprints are used for identification in forensics, and fingerprint identification is classified into manual and automatic. Automatic fingerprint identification systems are further classified into latent and exemplar. A novel exemplar technique of Fingerprint Image Verification using Dictionary Learning (FIVDL) is proposed to improve performance on low-quality fingerprints, where the dictionary learning method reduces time complexity by using block processing instead of pixel processing. The dynamic range of an image is adjusted using the Successive Mean Quantization Transform (SMQT) technique, and frequency-domain noise is reduced using spectral-frequency histogram equalization. Then, an adaptive nonlinear dynamic-range adjustment technique is utilized to determine the local spectral features on the corresponding fingerprint ridge frequency and orientation. The dictionary is constructed using the spatial fundamental frequency determined from the spectral features. These dictionaries help remove the spurious noise present in fingerprints and reduce time complexity through block processing. Further, the dictionaries are used to reconstruct the image for matching. The proposed FIVDL is verified on FVC database sets, and experimental results show an improvement over state-of-the-art techniques. (C) 2015 The Authors. Published by Elsevier B.V.
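A minimal sketch of the SMQT step used for dynamic-range adjustment, following the standard successive-mean-quantization recursion; the level is an illustrative choice, and the rest of the FIVDL pipeline (histogram equalization, dictionary construction, matching) is not shown.

```python
# Sketch: Successive Mean Quantization Transform (SMQT) for dynamic-range
# adjustment of a grayscale fingerprint image. Each level splits the pixels
# around their mean and contributes one output bit.
import numpy as np

def smqt(image, level=8):
    flat = image.astype(float).ravel()
    out = np.zeros(flat.shape, dtype=np.int64)

    def mean_quantization_unit(idx, l):
        if l == 0 or idx.size == 0:
            return
        mean = flat[idx].mean()
        upper = idx[flat[idx] > mean]
        lower = idx[flat[idx] <= mean]
        out[upper] += 1 << (l - 1)           # set the bit for this level
        mean_quantization_unit(lower, l - 1)
        mean_quantization_unit(upper, l - 1)

    mean_quantization_unit(np.arange(flat.size), level)
    return out.reshape(image.shape)          # values in [0, 2**level - 1]
```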