141 results for Historical Document Recognition

at Indian Institute of Science - Bangalore - India


Relevance:

90.00%

Publisher:

Abstract:

In this paper, we report a breakthrough result on the difficult task of segmentation and recognition of coloured text from the word image dataset of the ICDAR robust reading competition challenge 2: reading text in scene images. We split the word image into individual colour, gray and lightness planes and enhance the contrast of each of these planes independently by a power-law transform. The discrimination factor of each plane is computed as the maximum between-class variance used in Otsu thresholding. The plane that has the maximum discrimination factor is selected for segmentation. The trial version of Omnipage OCR is then used on the binarized words for recognition. Our recognition results on the ICDAR 2011 and ICDAR 2003 word datasets are compared with those reported in the literature. As a baseline, the images binarized by simple global and local thresholding techniques were also recognized. The word recognition rate obtained by our non-linear enhancement and plane selection method is 72.8% and 66.2% for the ICDAR 2011 and 2003 word datasets, respectively. We have created ground-truth for each image at the pixel level to benchmark these datasets, using a toolkit developed by us. The recognition rate of the benchmarked images is 86.7% and 83.9% for the ICDAR 2011 and 2003 datasets, respectively.
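
As a rough sketch of the plane-selection step described above (assuming 8-bit planes and an illustrative gamma value, which the abstract does not specify), the following Python computes the Otsu between-class-variance criterion for each power-law-enhanced plane and keeps the plane with the largest discrimination factor.

```python
import numpy as np

def otsu_between_class_variance(plane):
    """Maximum between-class variance of an 8-bit plane (Otsu criterion)."""
    hist = np.bincount(plane.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    omega = np.cumsum(prob)                # class probabilities up to threshold t
    mu = np.cumsum(prob * np.arange(256))  # cumulative means up to threshold t
    mu_total = mu[-1]
    with np.errstate(divide='ignore', invalid='ignore'):
        sigma_b = (mu_total * omega - mu) ** 2 / (omega * (1.0 - omega))
    return np.nanmax(sigma_b)              # discrimination factor of the plane

def select_plane(planes, gamma=1.5):
    """Pick the colour/gray/lightness plane with the largest discrimination
    factor after power-law contrast enhancement (gamma=1.5 is an assumption)."""
    best, best_score = None, -1.0
    for p in planes:
        enhanced = (255.0 * (p / 255.0) ** gamma).astype(np.uint8)
        score = otsu_between_class_variance(enhanced)
        if score > best_score:
            best, best_score = enhanced, score
    return best, best_score
```

Otsu thresholding of the selected plane would then yield the binarized word image passed to the OCR engine.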

Relevance:

90.00%

Publisher:

Abstract:

A necessary step in the recognition of scanned documents is binarization, which is essentially the segmentation of the document. Several algorithms for binarizing a scanned document can be found in the literature. What is the best binarization result for a given document image? To answer this question, a user needs to check different binarization algorithms for suitability, since different algorithms may work better for different types of documents. Manually choosing the best from a set of binarized documents is time-consuming. To automate the selection of the best segmented document, we either need the ground-truth of the document or an evaluation metric. If ground-truth is available, then precision and recall can be used to choose the best binarized document. What about the case when ground-truth is not available? Can we come up with a metric that evaluates these binarized documents? Hence, we propose a metric to evaluate binarized document images using eigenvalue decomposition. We have evaluated this measure on the DIBCO and H-DIBCO datasets. The proposed method chooses the binarized document that is closest to the ground-truth of the document.
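
The eigenvalue-based metric itself is not detailed in the abstract; as a minimal sketch of the ground-truth branch it mentions, the following picks the best of several candidate binarizations by pixel-level precision, recall and F-measure against a boolean ground-truth mask (all names here are illustrative).

```python
import numpy as np

def precision_recall(binary, ground_truth):
    """Pixel-level precision and recall, with foreground (ink) as True."""
    tp = np.logical_and(binary, ground_truth).sum()
    fp = np.logical_and(binary, ~ground_truth).sum()
    fn = np.logical_and(~binary, ground_truth).sum()
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

def best_binarization(candidates, ground_truth):
    """Choose the candidate binarization with the highest F-measure."""
    def f_measure(b):
        p, r = precision_recall(b, ground_truth)
        return 2 * p * r / (p + r) if p + r else 0.0
    return max(candidates, key=f_measure)
```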

Relevance:

30.00%

Publisher:

Abstract:

The document images that are fed into an Optical Character Recognition system might be skewed. This could be due to improper feeding of the document into the scanner or to a faulty scanner. In this paper, we propose a skew detection and correction method for document images. We make use of the inherent randomness in the horizontal projection profile of a text block image as the skew of the image varies. The proposed algorithm has proved to be very robust and time-efficient. The entire process takes less than a second on a 2.4 GHz Pentium IV PC.
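
The abstract does not spell out the randomness measure; as an assumed stand-in, the sketch below rotates the block over a range of candidate angles and keeps the angle that maximises the variance of the horizontal projection profile, since a deskewed text block gives the sharpest line peaks and inter-line valleys.

```python
import numpy as np
from scipy.ndimage import rotate

def estimate_skew(binary_block, angle_range=5.0, step=0.1):
    """Estimate the skew angle (degrees) of a binary text block.
    The variance-of-HPP criterion and the angle range are assumptions,
    not the paper's exact randomness measure."""
    best_angle, best_score = 0.0, -1.0
    for angle in np.arange(-angle_range, angle_range + step, step):
        rotated = rotate(binary_block.astype(float), angle,
                         reshape=False, order=1)
        hpp = rotated.sum(axis=1)   # horizontal projection profile
        score = hpp.var()           # peaky profile -> aligned text lines
        if score > best_score:
            best_angle, best_score = angle, score
    return best_angle               # correct by rotating through -best_angle
```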

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we describe a system for the automatic recognition of isolated handwritten Devanagari characters obtained by linearizing consonant conjuncts. Owing to the large number of characters and the resulting demands on data acquisition, we use structural recognition techniques to reduce some characters to others. The residual characters are then classified using the subspace method. Finally, the results of structural recognition and feature-based matching are mapped to give the final output. The proposed system is evaluated for the writer-dependent scenario.
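
As a minimal sketch of the subspace classification step (a CLAFIC-style subspace method, assuming fixed-length feature vectors per sample and an illustrative subspace dimension), the following builds a principal subspace per class and assigns a character to the class whose subspace captures most of its energy.

```python
import numpy as np

def build_subspaces(class_samples, k=10):
    """class_samples: dict label -> (n_samples, d) array of feature vectors.
    Returns an orthonormal basis of the k leading principal directions per class
    (k=10 is an assumed dimension)."""
    bases = {}
    for label, X in class_samples.items():
        Xc = X - X.mean(axis=0)
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)  # rows of Vt: directions
        bases[label] = Vt[:k]
    return bases

def classify(x, bases):
    """Assign x to the class whose subspace captures most of its energy."""
    def projection_energy(B):
        coeffs = B @ x
        return float(coeffs @ coeffs)
    return max(bases, key=lambda label: projection_energy(bases[label]))
```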

Relevance:

30.00%

Publisher:

Abstract:

Separation of printed text blocks from non-text areas containing signatures, handwritten text, logos and other such symbols is a necessary first step for an OCR system involving printed text recognition. In the present work, we compare the efficacy of some feature-classifier combinations for carrying out this separation task. We have selected the length-normalized horizontal projection profile (HPP) as the starting point of such a separation task, on the assumption that printed text blocks contain lines of text which generate HPPs with some regularity. Such an assumption is demonstrated to be valid. Our features are the HPP and its two transformed versions, namely the eigen and Fisher profiles. Four well-known classifiers, namely nearest neighbor, linear discriminant function, SVMs and artificial neural networks, have been considered, and the efficiency of the combination of these classifiers with the above features is compared. A sequential floating feature selection technique has been adopted to enhance the efficiency of this separation task. The results give an average accuracy of about 96%.
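
As a hedged sketch of the simplest feature-classifier pairing above (length-normalized HPP with a nearest-neighbor classifier; the eigen/Fisher projections and the floating feature selection are omitted, and the profile length of 64 is an assumption):

```python
import numpy as np

def length_normalized_hpp(block, length=64):
    """Horizontal projection profile of a binary block, resampled to a fixed
    length and scaled to unit sum (length=64 is an assumed value)."""
    hpp = block.sum(axis=1).astype(float)
    resampled = np.interp(np.linspace(0, len(hpp) - 1, length),
                          np.arange(len(hpp)), hpp)
    total = resampled.sum()
    return resampled / total if total else resampled

def classify_block(profile, train_profiles, train_labels):
    """Label a block as text / non-text via 1-NN on HPP features."""
    dists = np.linalg.norm(train_profiles - profile, axis=1)
    return train_labels[int(np.argmin(dists))]
```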

Relevance:

30.00%

Publisher:

Abstract:

This paper presents the design of a full-fledged OCR system for printed Kannada text. The machine recognition of Kannada characters is difficult due to similarity in the shapes of different characters, script complexity and non-uniqueness in the representation of diacritics. The document image is subjected to line segmentation, word segmentation and zone detection. From the zonal information, base characters, vowel modifiers and consonant conjuncts are separated. A knowledge-based approach is employed for recognizing the base characters. Various features are employed for recognizing the characters; these include the coefficients of the Discrete Cosine Transform, Discrete Wavelet Transform and Karhunen-Loève Transform. These features are fed to different classifiers. Structural features are used in the subsequent levels to discriminate confused characters. Use of structural features increases the recognition rate from 93% to 98%. Apart from the classical pattern classification technique of nearest neighbour, Artificial Neural Network (ANN) based classifiers such as Back Propagation and Radial Basis Function (RBF) networks have also been studied. The ANN classifiers are trained in supervised mode using the transform features. The highest recognition rate of 99% is obtained with RBF networks using second-level approximation coefficients of Haar wavelets as the features on pre-segmented base characters.
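
As a rough sketch of the best-performing feature reported above (second-level Haar approximation coefficients), assuming the PyWavelets library and a 32x32 size-normalised, pre-segmented base character; the RBF classifier itself is omitted:

```python
import numpy as np
import pywt

def haar_level2_features(char_img, size=(32, 32)):
    """Second-level Haar approximation coefficients of a size-normalised
    base-character image, flattened into a feature vector.
    The 32x32 canonical size is an assumption."""
    img = np.asarray(char_img, dtype=float)
    # crude nearest-neighbour resize to the assumed canonical size
    rows = np.linspace(0, img.shape[0] - 1, size[0]).astype(int)
    cols = np.linspace(0, img.shape[1] - 1, size[1]).astype(int)
    img = img[np.ix_(rows, cols)]
    coeffs = pywt.wavedec2(img, 'haar', level=2)
    cA2 = coeffs[0]                 # level-2 approximation sub-band
    return cA2.ravel()
```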

Relevance:

30.00%

Publisher:

Abstract:

We present a fractal coding method to recognize online handwritten Tamil characters and propose a novel technique to increase the efficiency, in terms of time, of coding and decoding. This technique exploits the redundancy in the data, thereby achieving better compression and lower memory usage. It also reduces the encoding time and causes little distortion during reconstruction. Experiments have been conducted to use these fractal codes to classify the online handwritten Tamil characters from the IWFHR 2006 competition dataset. In one approach, we use the fractal coding and decoding process. A recognition accuracy of 90% has been achieved by using DTW for distortion evaluation during the classification and encoding processes, as compared to 78% using a nearest-neighbor classifier. In the other experiments, we use the fractal code, fractal dimensions and features derived from fractal codes as features in separate classifiers. While the fractal code is successful as a feature, the other two features are not able to capture the wide within-class variations.
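
The fractal codes themselves are beyond a short sketch, but the DTW distance used above for distortion evaluation can be illustrated; a minimal dynamic-programming version on (x, y) point sequences of online traces:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two online traces,
    each an (n, 2) array of (x, y) points."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]
```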

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we study different methods of prototype selection for recognizing handwritten characters of the Tamil script. In the first method, cumulative pairwise distances of the training samples of a given class are used to select prototypes. In the second method, the cumulative distance to allographs of different orientation is used as a criterion to decide if a sample is representative of the group. The latter method is presumed to offset the possible orientation effect. This method still uses a fixed number of prototypes for each of the classes. Finally, a prototype-set growing algorithm is proposed, with a view to better modelling the differences in complexity of different character classes. The proposed algorithms are tested and compared for both writer-independent and writer-adaptation scenarios.
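
A minimal sketch of the first method, assuming a user-supplied distance function (for online characters this would typically be a DTW distance) and an illustrative prototype count:

```python
import numpy as np

def select_prototypes(samples, distance, num_prototypes=5):
    """Pick the samples of one class with the smallest cumulative distance to
    all other samples of that class (num_prototypes=5 is an assumed count)."""
    n = len(samples)
    cumulative = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if i != j:
                cumulative[i] += distance(samples[i], samples[j])
    order = np.argsort(cumulative)          # most central samples first
    return [samples[i] for i in order[:num_prototypes]]
```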

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we describe a method for feature extraction and classification of characters manually isolated from scene or natural images. Characters in a scene image may be affected by low resolution, uneven illumination or occlusion. We propose a novel method to perform binarization of gray-scale images by minimizing an energy functional. The Discrete Cosine Transform and the Angular Radial Transform are used to extract features from the characters after normalization for scale and translation. We have evaluated our method on the complete test set of the Chars74k dataset for the English and Kannada scripts, consisting of handwritten and synthesized characters, as well as characters extracted from camera-captured images. We utilize only the synthesized and handwritten characters from this dataset as the training set. Nearest neighbor classification is used in our experiments.
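
As a sketch of the DCT part of the feature extraction (the energy-minimization binarization and the Angular Radial Transform are omitted; the 8x8 block of retained low-frequency coefficients is an assumption), followed by the nearest-neighbor classification used in the experiments:

```python
import numpy as np
from scipy.fft import dctn

def dct_features(char_img, keep=8):
    """Low-frequency 2-D DCT coefficients of a normalised character image,
    taken from the top-left keep x keep block (keep=8 is an assumption)."""
    img = np.asarray(char_img, dtype=float)
    coeffs = dctn(img, norm='ortho')
    return coeffs[:keep, :keep].ravel()

def nearest_neighbour_label(features, train_features, train_labels):
    """1-NN classification over the extracted feature vectors."""
    dists = np.linalg.norm(train_features - features, axis=1)
    return train_labels[int(np.argmin(dists))]
```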

Relevance:

30.00%

Publisher:

Abstract:

N-gram language models and lexicon-based word recognition are popular methods in the literature for improving recognition accuracies on online and offline handwritten data. However, there are very few works that deal with the application of these techniques to online Tamil handwritten data. In this paper, we explore methods of developing symbol-level language models and a lexicon from a large Tamil text corpus and their application to improving symbol and word recognition accuracies. On a test database of around 2000 words, we find that bigram language models improve symbol (3%) and word (8%) recognition accuracies, and while lexicon-based methods offer much greater improvements (30%) in terms of word recognition, there is a large dependency on choosing the right lexicon. For comparison with the lexicon and language model based methods, we have also explored re-evaluation techniques which involve the use of expert classifiers to improve symbol and word recognition accuracies.
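
As a hedged sketch of symbol-level bigram rescoring (a greedy left-to-right simplification rather than a full search; add-one smoothing and the interpolation weight alpha are assumptions, and the recognizer is assumed to supply log scores per candidate symbol):

```python
import math
from collections import defaultdict

def train_bigrams(corpus_symbol_sequences):
    """Symbol-level bigram counts from a corpus of symbol sequences."""
    unigrams, bigrams = defaultdict(int), defaultdict(int)
    for seq in corpus_symbol_sequences:
        for prev, cur in zip(['<s>'] + list(seq[:-1]), seq):
            unigrams[prev] += 1          # count of prev as a left context
            bigrams[(prev, cur)] += 1
    return unigrams, bigrams

def rescore(candidates, unigrams, bigrams, vocab_size, alpha=0.7):
    """candidates: per position, a list of (symbol, recognizer_log_score) pairs.
    Greedy left-to-right rescoring mixing recognizer and bigram log scores;
    add-one smoothing and alpha=0.7 are assumptions."""
    prev, output = '<s>', []
    for options in candidates:
        def combined(opt):
            sym, rec_score = opt
            lm = math.log((bigrams[(prev, sym)] + 1) /
                          (unigrams[prev] + vocab_size))
            return alpha * rec_score + (1 - alpha) * lm
        best_sym = max(options, key=combined)[0]
        output.append(best_sym)
        prev = best_sym
    return output
```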

Relevance:

30.00%

Publisher:

Abstract:

We have benchmarked the maximum obtainable recognition accuracy on five publicly available standard word image data sets using semi-automated segmentation and a commercial OCR. These images have been cropped from camera-captured scene images, born-digital images (BDI) and street view images. Using the Matlab-based tool developed by us, we have annotated, at the pixel level, more than 3600 word images from the five data sets. The word images binarized by the tool, as well as by our own midline analysis and propagation of segmentation (MAPS) algorithm, are recognized using the trial version of Nuance Omnipage OCR, and these two results are compared with the best reported in the literature. The benchmark word recognition rates obtained on the ICDAR 2003, Sign evaluation, Street view, Born-digital and ICDAR 2011 data sets are 83.9%, 89.3%, 79.6%, 88.5% and 86.7%, respectively. The results obtained from MAPS-binarized word images without the use of any lexicon are 64.5% and 71.7% for ICDAR 2003 and 2011, respectively, and these values are higher than the best reported values in the literature of 61.1% and 41.2%, respectively. The MAPS result of 82.8% for the BDI 2011 dataset matches the performance of the state-of-the-art method based on the power-law transform.

Relevance:

30.00%

Publisher:

Abstract:

1. Resilience-based approaches are increasingly being called upon to inform ecosystem management, particularly in arid and semi-arid regions. This requires management frameworks that can assess ecosystem dynamics, both within and between alternative states, at relevant time scales. 2. We analysed long-term vegetation records from two representative sites in the North American sagebrush-steppe ecosystem, spanning nine decades, to determine if empirical patterns were consistent with resilience theory, and to determine if cheatgrass Bromus tectorum invasion led to thresholds as currently envisioned by expert-based state-and-transition models (STM). These data span the entire history of cheatgrass invasion at these sites and provide a unique opportunity to assess the impacts of biotic invasion on ecosystem resilience. 3. We used univariate and multivariate statistical tools to identify unique plant communities and document the magnitude, frequency and directionality of community transitions through time. Community transitions were characterized by 37-47% dissimilarity in species composition; they were not evenly distributed through time, their frequency was not correlated with precipitation, and they could not be readily attributed to fire or grazing. Instead, at both sites, the majority of community transitions occurred within an 8-10 year period of increasing cheatgrass density, became infrequent after cheatgrass density peaked, and thereafter transition frequency declined. 4. Greater cheatgrass density, replacement of native species and indications of asymmetry in community transitions suggest that thresholds may have been exceeded in response to cheatgrass invasion at one site (more arid), but not at the other site (less arid). Asymmetry in the direction of community transitions also identified communities that were 'at-risk' of cheatgrass invasion, as well as potential restoration pathways for recovery of pre-invasion states. 5. Synthesis and applications. These results illustrate the complexities associated with threshold identification, and indicate that criteria describing the frequency, magnitude, directionality and temporal scale of community transitions may provide greater insight into resilience theory and its application to ecosystem management. These criteria are likely to vary across biogeographic regions that are susceptible to cheatgrass invasion, and necessitate more in-depth assessments of thresholds and alternative states than are currently available.

Relevance:

30.00%

Publisher:

Abstract:

Restriction endonucleases interact with DNA at specific sites leading to cleavage of DNA. Bacterial DNA is protected from restriction endonuclease cleavage by modifying the DNA using a DNA methyltransferase. Based on their molecular structure, sequence recognition, cleavage position and cofactor requirements, restriction-modification (R-M) systems are classified into four groups. Type III R-M enzymes need to interact with two separate unmethylated DNA sequences in inversely repeated head-to-head orientations for efficient cleavage to occur at a defined location (25-27 bp downstream of one of the recognition sites). Like the Type I R-M enzymes, Type III R-M enzymes possess a sequence-specific ATPase activity for DNA cleavage. ATP hydrolysis is required for the long-distance communication between the sites before cleavage. Different models, based on 1D diffusion and/or 3D-DNA looping, exist to explain how the long-distance interaction between the two recognition sites takes place. Type III R-M systems are found in most sequenced bacteria. Genome sequencing of many pathogenic bacteria also shows the presence of a number of phase-variable Type III R-M systems, which play a role in virulence. A growing number of these enzymes are being subjected to biochemical and genetic studies, which, when combined with ongoing structural analyses, promise to provide details for mechanisms of DNA recognition and catalysis.

Relevance:

20.00%

Publisher:

Abstract:

Semi-rigid molecular tweezers 1, 3 and 4 bind picric acid with a more than tenfold enhancement in tetrachloromethane as compared to chloroform.

Relevance:

20.00%

Publisher:

Abstract:

The baculovirus expression system based on the Autographa californica nuclear polyhedrosis virus (AcNPV) has been extensively utilized for high-level expression of cloned foreign genes, driven by the strong viral promoters of the polyhedrin (polh) and p10 encoding genes. A parallel system using the Bombyx mori nuclear polyhedrosis virus (BmNPV) is much less exploited because the choice and variety of BmNPV-based transfer vectors are limited. Using a transient expression assay, we have demonstrated here that the heterologous promoters of the very late genes polh and p10 from AcNPV function as efficiently in BmN cells as the BmNPV promoters. The location of the cloned foreign gene with respect to the promoter sequences was critical for achieving the highest levels of expression, following the order +35 > +1 > -3 > -8 nucleotides (nt) with respect to the polh or p10 start codons. We have successfully generated recombinant BmNPV harboring AcNPV promoters by homeologous recombination between AcNPV-based transfer vectors and BmNPV genomic DNA. Infection of BmN cell lines with recombinant BmNPV showed a temporal expression pattern, reaching very high levels by 60-72 h post infection. The recombinant BmNPV harboring the firefly luciferase-encoding gene under the control of the AcNPV polh or p10 promoters, on infection of silkworm larvae, led to the synthesis of large quantities of luciferase. Such larvae emitted significant luminescence instantaneously on administration of the substrate luciferin, resulting in 'glowing silkworms'. The virus-infected larvae continued to glow for several hours and revealed the most abundant distribution of the virus in the fat bodies. In larval expression also, the highest levels were achieved when the reporter gene was located at +35 nt of polh.