730 results for Arabic character recognition
Abstract:
This paper describes a low-complexity strategy for automatically detecting and recognizing text signs. Traditional approaches apply computationally heavy image-processing algorithms to locate the text sign, followed by an Optical Character Recognition (OCR) algorithm in the previously identified areas. This paper proposes a new architecture that applies the OCR to the whole, lightly pre-processed image and then carries out the text-detection process on the OCR output. The strategy presented in this paper significantly reduces the processing time required for text localization in an image while guaranteeing a high recognition rate. This strategy will facilitate the incorporation of automatic text-sign detection into video-processing applications on devices such as smartphones. Such applications will increase the autonomy of visually impaired people in their daily lives.
Abstract:
A simple technique is presented for improving the robustness of the n-tuple recognition method against inauspicious choices of architectural parameters, guarding against the saturation problem, and improving the utilisation of small data sets. Experiments are reported which confirm that the method significantly improves performance and reduces saturation in character recognition problems.
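The n-tuple method referenced above can be sketched in a few lines. The following is a minimal, generic RAM-based n-tuple classifier, not the paper's improved variant; the 8x8 image size, the tuple parameters, and the two toy stroke classes are invented for illustration:

```python
import random

# Minimal, generic n-tuple (RAM-based) classifier sketch. The 8x8 image
# size, the 10 tuples of n = 4 pixel addresses, and the two toy classes
# below are illustrative assumptions, not taken from the paper.
H, W, N_TUPLES, N = 8, 8, 10, 4

rng = random.Random(0)
# Each tuple is a fixed random set of pixel positions shared by all classes.
tuples = [[rng.randrange(H * W) for _ in range(N)] for _ in range(N_TUPLES)]

def addresses(image):
    """Map a flat binary image to one memory address per tuple."""
    flat = [p for row in image for p in row]
    addrs = []
    for tup in tuples:
        addr = 0
        for pos in tup:
            addr = (addr << 1) | flat[pos]
        addrs.append(addr)
    return addrs

def train(samples_by_class):
    """For each class, remember which address each tuple has seen."""
    memory = {}
    for label, samples in samples_by_class.items():
        seen = [set() for _ in range(N_TUPLES)]
        for img in samples:
            for i, a in enumerate(addresses(img)):
                seen[i].add(a)
        memory[label] = seen
    return memory

def classify(memory, image):
    """Score = number of tuples whose address was seen in training."""
    addrs = addresses(image)
    scores = {label: sum(a in seen[i] for i, a in enumerate(addrs))
              for label, seen in memory.items()}
    return max(scores, key=scores.get)

# Toy classes: a vertical stroke and a horizontal stroke.
v_img = [[1 if x == 3 else 0 for x in range(8)] for y in range(8)]
h_img = [[1 if y == 3 else 0 for x in range(8)] for y in range(8)]
memory = train({"vertical": [v_img], "horizontal": [h_img]})
```

The saturation problem the abstract guards against occurs when the per-class address sets fill up, so that nearly every address has been seen and all classes score close to the maximum.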
Abstract:
After many years of scholarly study, manuscript collections continue to be an important source of novel information for scholars, concerning both the history of earlier times and the development of cultural documentation over the centuries. The D-SCRIBE project aims to support and facilitate current and future efforts in manuscript digitization and processing. It strives toward the creation of a comprehensive software product, which can assist content holders in turning an archive of manuscripts into a digital collection using automated methods. In this paper, we focus on the problem of recognizing early Christian Greek manuscripts. We propose a novel digital image binarization scheme for low-quality historical documents, allowing further content exploitation in an efficient way. Based on the existence of closed cavity regions in the majority of characters and character ligatures in these scripts, we propose a novel, segmentation-free, fast and efficient technique that assists the recognition procedure by tracing and recognizing the most frequently appearing characters or character ligatures.
Abstract:
This dissertation introduces a new system for handwritten text recognition based on an improved neural network design. Most existing neural networks use the mean square error function as the standard error function. The system proposed in this dissertation utilizes the mean quartic error function, whose third and fourth derivatives are non-zero. Consequently, several improvements to the training methods were achieved. The training results are carefully assessed before and after the update. To evaluate the performance of a training system, three essential factors must be considered, listed here from high to low priority: (1) the error rate on the testing set, (2) the processing time needed to recognize a segmented character, and (3) the total training time and, subsequently, the total testing time. It is observed that bounded training methods accelerate the training process, while semi-third-order training methods, next-minimal training methods, and preprocessing operations reduce the error rate on the testing set. Empirical observations suggest that two combinations of training methods are needed for recognizing characters of different cases. Since character segmentation is required for word and sentence recognition, this dissertation also provides an effective rule-based segmentation method, which differs from conventional adaptive segmentation methods. Dictionary-based correction is utilized to correct mistakes resulting from the recognition and segmentation phases. The integration of the segmentation methods with the handwritten character recognition algorithm yielded an accuracy of 92% for lower-case characters and 97% for upper-case characters. The testing database consists of 20,000 handwritten characters, with 10,000 for each case. Recognizing 10,000 handwritten characters in the testing phase required 8.5 seconds of processing time.
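The core change described in this abstract, training on a mean quartic rather than a mean square error, can be illustrated numerically. Only the two loss definitions follow from the abstract; the sample outputs and targets below are invented:

```python
# Toy comparison of the mean square error with the mean quartic error.
# Only the two loss definitions follow the abstract; the sample outputs
# and targets are invented for illustration.

def mse(outputs, targets):
    return sum((o - t) ** 2 for o, t in zip(outputs, targets)) / len(outputs)

def mqe(outputs, targets):
    return sum((o - t) ** 4 for o, t in zip(outputs, targets)) / len(outputs)

def mqe_grad(output, target):
    # d/do (o - t)^4 = 4(o - t)^3; the third derivative, 24(o - t), and
    # the fourth, 24, are non-zero, unlike in the quadratic case.
    return 4 * (output - target) ** 3

outputs, targets = [0.9, 0.2, 0.6], [1.0, 0.0, 1.0]
print(mse(outputs, targets))  # residuals 0.1, 0.2, 0.4 -> about 0.07
print(mqe(outputs, targets))  # fourth powers weight the 0.4 residual most
```

Because the residual enters at the fourth power, large errors dominate the quartic loss and its gradient far more strongly than the square loss, consistent with the abstract's point that the higher derivatives are non-zero.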
Abstract:
Due to both the widespread, multipurpose use of document images and the current availability of a large number of document image repositories, robust information retrieval mechanisms and systems have been increasingly demanded. This paper presents an approach to support the automatic generation of relationships among document images by exploiting Latent Semantic Indexing (LSI) and Optical Character Recognition (OCR). We developed the LinkDI (Linking of Document Images) service, which extracts and indexes document image content, computes its latent semantics, and defines relationships among images as hyperlinks. LinkDI was tested on document image repositories, and its performance was evaluated by comparing the quality of the relationships created among textual documents with that of the relationships created among their respective document images. Considering those same document images, we ran further experiments to compare the performance of LinkDI with and without the LSI technique. Experimental results showed that LSI can mitigate the effects of common OCR misrecognition, which reinforces the feasibility of LinkDI relating even highly degraded OCR output.
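The LSI step on which LinkDI relies can be sketched as a truncated SVD of a term-document matrix. The tiny matrix, vocabulary, and rank below are invented for illustration (they are not LinkDI's data); the example mimics an OCR misrecognition that drops a term from one document:

```python
import numpy as np

# Minimal LSI sketch: SVD of a term-document matrix, then cosine
# similarity between documents in a reduced latent space. The matrix,
# vocabulary, and rank k are invented; they only illustrate how
# co-occurring terms can relate documents despite an OCR error.
#               d0  d1  d2
A = np.array([[2., 1., 0.],   # "image"  (misrecognized by OCR in d2)
              [1., 1., 1.],   # "scanner"
              [1., 2., 2.],   # "text"
              [0., 0., 1.]])  # noise term introduced by the OCR

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2                                # keep the top-k latent dimensions
docs = (np.diag(s[:k]) @ Vt[:k]).T   # document vectors in latent space

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Despite the missing "image" term, d2 stays closer to d1 than to d0.
print(cos(docs[1], docs[2]), cos(docs[0], docs[2]))
```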
Abstract:
Internship report submitted in fulfillment of the requirements for the degree of Master in Edição de Texto (Text Editing).
Abstract:
This project consists of the study, comparison, and hardware implementation of character recognition algorithms to be integrated into an intelligent image-capture system. This system, comprising a camera with a specific format and characteristics, will be attached to a traditional water meter, capture images of it, and send them by RF to the company's reception point. The main objective is to achieve a design that minimizes the amount of information to be transmitted, taking into account the constraints of the environment.
Abstract:
This report describes the results of a research project investigating the use of advanced field data acquisition technologies for Iowa transportation agencies. The objectives of the research project were to (1) research and evaluate current data acquisition technologies for field data collection, manipulation, and reporting; (2) identify the current field data collection approach and the level of interest in applying current technologies within Iowa transportation agencies; and (3) summarize findings, prioritize technology needs, and provide recommendations regarding suitable applications for future development. A steering committee consisting of state, city, and county transportation officials provided guidance during this project. Technologies considered in this study included (1) data storage (bar coding, radio frequency identification, touch buttons, magnetic stripes, and video logging); (2) data recognition (voice recognition and optical character recognition); (3) field referencing systems (global positioning systems [GPS] and geographic information systems [GIS]); (4) data transmission (radio frequency data communications and electronic data interchange); and (5) portable computers (pen-based computers). The literature review revealed that many of these technologies could have useful applications in the transportation industry. A survey was developed to identify current data collection methods and the interest in using advanced field data collection technologies. Surveys were sent to county and city engineers and to state representatives responsible for certain programs (e.g., maintenance management and construction management). Results showed that almost all field data are collected using manual approaches and are hand-carried to the office, where they are either entered into a computer or stored manually.
A lack of standardization was apparent in the types of software applications used by each agency; even the types of forms used to manually collect data differed by agency. Furthermore, interest in using advanced field data collection technologies depended upon the technology, the program (e.g., pavement or sign management), and the agency type (e.g., state, city, or county). The state and the larger cities and counties seemed interested in using several of the technologies, whereas smaller agencies appeared to have very little interest in using advanced techniques to capture data. A more thorough analysis of the survey results is provided in the report. Recommendations are made to enhance the use of advanced field data acquisition technologies in Iowa transportation agencies: (1) Appoint a statewide task group to coordinate the effort to automate field data collection and reporting within the Iowa transportation agencies; subgroups representing the cities, counties, and state should be formed with oversight provided by the statewide task group. (2) Educate employees so that they become familiar with the various field data acquisition technologies.
Abstract:
This paper presents research concerning the conversion of non-accessible web pages containing mathematical formulae into accessible versions through an OCR (Optical Character Recognition) tool. The objective of this research is twofold: first, to establish criteria for evaluating the potential accessibility of mathematical web sites, i.e. the feasibility of converting non-accessible (non-MathML) math sites into accessible (MathML) ones; second, to propose a data model and a mechanism to publish evaluation results, making them available to the educational community, who may use them as a quality measurement for selecting learning material. Results show that conversion using OCR tools is not viable for math web pages, mainly for two reasons: many of these pages are designed to be interactive, which makes a correct conversion difficult, if not almost impossible; and formulae (whether images or text) have been written without taking into account the standards of mathematical writing, so OCR tools do not properly recognize math symbols and expressions. In spite of these results, we think the proposed methodology for creating and publishing evaluation reports may be rather useful in other accessibility assessment scenarios.
Abstract:
Optical character recognition plays an important role in present-day automation. Its application areas range from recognizing text in documents to vehicle identification and to the automation and quality control of various production and assembly lines. This thesis focuses on the use of optical character recognition in port traffic. The work is divided into two parts. The first part presents the two most common, and at the same time most important, application areas of optical character recognition for ports: license plate recognition and container recognition. The latter part deals with the identification of railway wagons by means of optical character recognition. The wagon stock operating in ports and the identifiers found on it are presented. The machine vision system that performs wagon identification, the hardware it requires, and the stages of image processing and image analysis are reviewed. Image analysis is divided into four main stages: preprocessing, segmentation, feature extraction, and classification. For each stage, several methods are presented, and their applicability to the stated problem is assessed at the end of the thesis.
Abstract:
A new procedure for the classification of lower-case English-language characters is presented in this work. The character image is binarised, and the binary image is further divided into sixteen smaller areas, called cells. Each cell is assigned a name depending upon the contour present in the cell and the occupancy of the image contour in the cell. A data reduction procedure called filtering is adopted to eliminate undesirable redundant information, reducing complexity during further processing steps. The filtered data is fed into a primitive extractor, where extraction of primitives is done. Syntactic methods are employed for the classification of the character. A decision tree is used for the interaction of the various components in the scheme, such as primitive extraction and character recognition. A character is recognized by the primitive-by-primitive construction of its description. Open-ended inventories are used for including variants of the characters and also for adding new members to the general class. Computer implementation of the proposal is discussed at the end using handwritten character samples. Results are analyzed and suggestions for future studies are made. The advantages of the proposal are discussed in detail.
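The cell decomposition described above can be sketched as follows. The 4x4 grid matches the sixteen cells mentioned in the abstract, while the occupancy thresholds and the labels "empty"/"low"/"high" are simplified stand-ins for the paper's contour-based cell names:

```python
# Sketch of the cell-decomposition step: a binarised character image is
# split into a 4x4 grid of sixteen cells, and each cell is labelled by
# its foreground occupancy. The thresholds and label names are
# illustrative simplifications of the paper's contour-based naming.

def cell_features(image, grid=4):
    h, w = len(image), len(image[0])
    ch, cw = h // grid, w // grid
    labels = []
    for gy in range(grid):
        row = []
        for gx in range(grid):
            # Count foreground pixels inside this cell.
            on = sum(image[y][x]
                     for y in range(gy * ch, (gy + 1) * ch)
                     for x in range(gx * cw, (gx + 1) * cw))
            occupancy = on / (ch * cw)
            if occupancy == 0:
                row.append("empty")
            elif occupancy < 0.5:
                row.append("low")
            else:
                row.append("high")
        labels.append(row)
    return labels

# An 8x8 "L"-like character: a vertical stroke plus a bottom bar.
img = [[1 if x < 2 or y >= 6 else 0 for x in range(8)] for y in range(8)]
print(cell_features(img))
```

A syntactic classifier such as the one described would then match this 4x4 grid of cell names against per-character grammars rather than raw pixels.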
Abstract:
Handwriting is an acquired tool used to communicate one's observations or feelings. The factors that influence a person's handwriting depend not only on the individual's bio-mechanical constraints, the handwriting education received, the writing instrument, the type of paper, and the background, but also on factors like stress, motivation, and the purpose of the handwriting. Despite the high variation within a person's handwriting, recent results from different writer identification studies have shown that it possesses sufficient individual traits to be used as an identification method. Handwriting as a behavioral biometric has interested researchers for a long time, but it has recently been enjoying renewed interest due to an increased need and effort to deal with problems ranging from white-collar crime to terrorist threats. Identifying the writer from a piece of handwriting is a challenging task for pattern recognition. The main objective of this thesis is to develop a text-independent writer identification system for Malayalam handwriting. The study also extends to developing a framework for online character recognition of Grantha script and Malayalam characters.
Abstract:
Stereoscopic 3-D display is based on the true-to-life presentation of different perspectives to the right and left eye. It is gaining ever greater importance in medicine, architecture, and design, as well as in computer games and cinema, and possibly in television in the future. 3-D displays additionally reproduce spatial depth and can be roughly divided into four groups: stereoscopes and head-mounted displays, glasses-based systems, autostereoscopic displays, and true 3-D displays. Among these, the glasses-free autostereoscopic approach, in which N≥2 perspectives are used, has high potential. The best quality in this group can be achieved with the integral photography method, which encodes both horizontal and vertical parallax. However, the technique is very elaborate and is therefore rarely used. The best compromise between performance and price is offered by precisely manufactured lenticular lens sheets (LRS), which are superior to the earlier barrier masks in terms of light output and optical properties. A high physical monitor resolution is required, particularly for ergonomically favorable multi-perspective 3-D display. It is already quite high in modern TFT displays. A further improvement, by a theoretical factor of three, is achieved by deliberately addressing the individual red, green, and blue subpixels arranged side by side. This is made possible because the color resolution of the human visual system is about an order of magnitude lower than its luminance resolution. A subpixel filtering can thus be implemented that, matching these physiological conditions, works with the YUV color model, which separates luminance and chrominance. Furthermore, slanting the lenses at a ratio of 1:6 proves favorable.
Color artifacts are minimized, and image sharpness is increased by a less systematic magnification of the technologically unavoidable separating elements between the subpixels. The degree of slant is freely selectable; in this sense the filtering is adaptive to the slant angle, although this value is an invariant for a given 3-D monitor. The quantity to be maximized is the perspective-pixel parameter, the product of the number of perspectives N and the effective resolution per perspective. The ideal case of a tripling is not reached in practice. Measurements using test images as well as character recognition tests yielded a value of just over 2. This is nevertheless a significant improvement in the quality of the 3-D display. In the future, further improvements in this target quantity can be expected from new technologies that resolve more finely than TFT, such as LCoS or OLED. Combining them with the proposed filtering method will of course remain possible and, where appropriate, worthwhile.
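The luminance/chrominance separation that the subpixel filtering exploits can be illustrated with the standard BT.601 RGB-to-YUV conversion; the thesis's actual filter design is not reproduced here:

```python
# Standard BT.601 RGB -> YUV conversion: the luminance/chrominance
# separation on which subpixel filtering of the kind described above
# can be based. This is the generic transform, not the thesis's filter.

def rgb_to_yuv(r, g, b):
    """RGB in [0, 1] -> (Y, U, V) using BT.601 weights."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 0.492 * (b - y)   # chrominance: blue minus luma
    v = 0.877 * (r - y)   # chrominance: red minus luma
    return y, u, v

# Grey pixels carry luminance only: U and V vanish.
print(rgb_to_yuv(0.5, 0.5, 0.5))
```

Filtering Y at full subpixel resolution while treating U and V more coarsely is what lets the eye's lower chromatic acuity absorb the color error.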
Abstract:
With the widespread proliferation of computers, many human activities entail the use of automatic image analysis. The basic features used for image analysis include color, texture, and shape. In this paper, we propose a new shape description method, called Hough Transform Statistics (HTS), which uses statistics from the Hough space to characterize the shape of objects or regions in digital images. A modified version of this method, called Hough Transform Statistics neighborhood (HTSn), is also presented. Experiments carried out on three popular public image databases showed that the HTS and HTSn descriptors are robust, presenting precision-recall results much better than several other well-known shape description methods. Compared with the Beam Angle Statistics (BAS) method, the shape description method that inspired their development, both HTS and HTSn presented inferior results on the precision-recall criterion, but superior results on the processing time and multiscale separability criteria. The linear complexity of the HTS and HTSn algorithms, in contrast to BAS, makes them more appropriate for shape analysis in high-resolution image retrieval tasks over the very large databases that are common nowadays. (C) 2014 Elsevier Inc. All rights reserved.
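The general idea behind Hough-space shape descriptors such as HTS can be sketched as follows. The accumulator resolution and the peak-per-angle statistic below are illustrative choices, not the statistics defined in the paper:

```python
import math

# Illustrative Hough-space shape descriptor: vote contour points into a
# discretized (theta, rho) line space, then summarize the accumulator
# with a simple statistic. The resolution and the peak-per-angle
# statistic are invented; HTS/HTSn define their own statistics.

def hough_accumulator(points, n_theta=18, n_rho=16, rho_max=16.0):
    """Vote every point into a discretized (theta, rho) line space."""
    acc = [[0] * n_rho for _ in range(n_theta)]
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            r = int((rho + rho_max) / (2 * rho_max) * n_rho)
            if 0 <= r < n_rho:
                acc[t][r] += 1
    return acc

def descriptor(points, n_theta=18):
    acc = hough_accumulator(points, n_theta=n_theta)
    n = len(points)
    # Peak vote per angle, normalized by the point count: straight runs
    # of the contour produce a peak at the angle of their normal.
    return [max(row) / n for row in acc]

# A horizontal segment: all points share rho at theta = 90 degrees.
desc = descriptor([(x, 5) for x in range(10)])
```

Note the linear cost in the number of contour points (times the fixed angle count), which is the complexity property the abstract contrasts with BAS.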
Abstract:
This thesis deals with the recognition and identification of vehicle license plate characters. Such recognition systems are known worldwide as ANPR ("Automatic Number Plate Recognition") or LPR ("License Plate Recognition") systems. The great number of vehicles and the volume of logistics moving every second all over the world make their registration necessary for processing and control. It is therefore necessary to implement a system that can correctly identify these resources for further processing, thus building a useful, agile, and dynamic tool. The work is structured in several parts. The first presents the objectives and motivations pursued in carrying out this project. The second addresses and develops all the theoretical, technical, and mathematical processes that make up a typical ANPR system, in order to implement a practical application that can demonstrate their usefulness in any situation. The third develops the practical part on which the theoretical basis of the work rests; it describes and develops the various algorithms created to study and verify everything proposed so far, as well as to observe their behavior. Several processes characteristic of character and pattern recognition are implemented, such as the detection of areas or patterns, image rotation and transformation, edge detection, character and pattern segmentation, thresholding and normalization, feature and pattern extraction, neural networks, and finally optical character recognition, commonly known as OCR. The last part presents the results obtained with the character recognition system implemented for the work and sets out the conclusions drawn from it. Finally, future lines of improvement, development, and research are proposed, in order to build a more efficient and comprehensive system.