919 results for Optical pattern recognition Data processing


Relevance: 100.00%

Abstract:

Handwritten character recognition has always been a frontier area of research in the field of pattern recognition and image processing, and there is a large demand for OCR of handwritten documents. Although sufficient studies have been performed on foreign scripts such as Chinese, Japanese and Arabic characters, only very little work can be traced for handwritten character recognition of Indian scripts, especially the South Indian scripts. This paper provides an overview of offline handwritten character recognition in South Indian scripts, namely Malayalam, Tamil, Kannada and Telugu.

Relevance: 100.00%

Abstract:

Stereoscopic 3-D display is based on the true-to-life presentation of different perspectives to the right and left eye. It is becoming increasingly important in medicine, architecture, design, computer games and cinema, and possibly in television in the future. 3-D displays additionally reproduce spatial depth and can be roughly divided into four groups: stereoscopes and head-mounted displays, glasses-based systems, autostereoscopic displays, and true 3-D displays. Among these, the glasses-free autostereoscopic approach, which uses N≥2 perspectives, has high potential. The best quality in this group can be achieved with the integral photography method, which encodes both horizontal and vertical parallax; however, the technique is very complex and is therefore rarely used. The best compromise between performance and cost is offered by precisely manufactured lenticular lens sheets (LRS), which are superior to the earlier barrier masks in terms of light output and optical properties. Ergonomically favourable multi-perspective 3-D display in particular requires a high physical monitor resolution, which is already quite high in modern TFT displays. A further improvement, by a theoretical factor of three, is achieved by specifically addressing the individual, adjacently arranged subpixels in the colours red, green and blue. This is made possible because the colour resolution of the human visual system is roughly one order of magnitude lower than its luminance resolution. It thus becomes feasible to implement a subpixel filtering which, in accordance with these physiological properties, operates in the YUV colour model that separates luminance and chrominance. Furthermore, slanting the lenses at a ratio of 1:6 proves advantageous: colour artefacts are minimised, and image sharpness is increased by a less systematic magnification of the technologically unavoidable separating elements between the subpixels. The degree of slant can be chosen freely; in this sense the filtering is adaptive to the slant angle, although this value is an invariant for a given 3-D monitor. The quantity to be maximised is the perspective-pixel parameter, defined as the product of the number of perspectives N and the effective resolution per perspective. The ideal case of a threefold increase is not reached in practice; measurements with test images and character recognition tests yielded a value of just over 2. This nevertheless represents a significant improvement in the quality of the 3-D display. In the future, further improvements in this target quantity can be expected from new technologies with finer resolution than TFT, such as LCoS or OLED. A combination with the proposed filtering method will of course remain possible and, where appropriate, useful.
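
The core idea can be illustrated with a small sketch. This is not the dissertation's implementation, only a minimal approximation under stated assumptions: a BT.601 YUV conversion, Gaussian low-pass filtering from scipy, and illustrative sigma values. It filters chrominance more strongly than luminance before views are interleaved onto the display's subpixels, and restates the perspective-pixel figure of merit.

```python
# Minimal sketch (assumptions: BT.601 YUV conversion, scipy Gaussian smoothing)
# of the principle above: the eye resolves chrominance about an order of
# magnitude more coarsely than luminance, so U/V can be smoothed more than Y.
import numpy as np
from scipy.ndimage import gaussian_filter

def rgb_to_yuv(rgb):
    """rgb: float array (H, W, 3) in [0, 1]; returns Y, U, V planes."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 0.492 * (b - y)
    v = 0.877 * (r - y)
    return y, u, v

def yuv_to_rgb(y, u, v):
    r = y + v / 0.877
    b = y + u / 0.492
    g = (y - 0.299 * r - 0.114 * b) / 0.587
    return np.stack([r, g, b], axis=-1)

def subpixel_prefilter(rgb, sigma_luma=0.5, sigma_chroma=1.5):
    """Smooth chrominance more strongly than luminance before the views are
    interleaved onto the R, G, B subpixel columns of the lenticular display."""
    y, u, v = rgb_to_yuv(rgb)
    y = gaussian_filter(y, sigma_luma)
    u = gaussian_filter(u, sigma_chroma)
    v = gaussian_filter(v, sigma_chroma)
    return np.clip(yuv_to_rgb(y, u, v), 0.0, 1.0)

def perspective_pixels(n_views, effective_resolution_per_view):
    # Figure of merit from the abstract: number of perspectives N times the
    # effective resolution per perspective (measured gain is just over 2).
    return n_views * effective_resolution_per_view
```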

Relevance: 100.00%

Abstract:

We investigate whether dimensionality reduction using a latent generative model is beneficial for the task of weakly supervised scene classification. In detail, we are given a set of labeled images of scenes (for example, coast, forest, city, river, etc.), and our objective is to classify a new image into one of these categories. Our approach consists of first discovering latent "topics" using probabilistic Latent Semantic Analysis (pLSA), a generative model from the statistical text literature here applied to a bag of visual words representation for each image, and subsequently training a multiway classifier on the topic distribution vector for each image. We compare this approach to that of representing each image by a bag of visual words vector directly and training a multiway classifier on these vectors. To this end, we introduce a novel vocabulary using dense color SIFT descriptors and then investigate the classification performance under changes in the size of the visual vocabulary, the number of latent topics learned, and the type of discriminative classifier used (k-nearest neighbor or SVM). We achieve superior classification performance to recent publications that have used a bag of visual words representation, in all cases using the authors' own data sets and testing protocols. We also investigate the gain from adding spatial information. We show applications to image retrieval with relevance feedback and to scene classification in videos.
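
A rough sketch of this kind of pipeline is shown below. It is not the authors' code: pLSA is not available in scikit-learn, so LatentDirichletAllocation is used here as a stand-in topic model, and the descriptor source, vocabulary size and topic count are illustrative assumptions.

```python
# Hedged sketch of a bag-of-visual-words + topic-model scene classifier.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.svm import SVC

def build_vocabulary(all_descriptors, vocab_size=500, seed=0):
    """Quantise local descriptors (e.g. dense colour SIFT) into visual words."""
    return KMeans(n_clusters=vocab_size, random_state=seed).fit(all_descriptors)

def bow_histogram(descriptors, vocabulary):
    words = vocabulary.predict(descriptors)
    return np.bincount(words, minlength=vocabulary.n_clusters).astype(float)

def train_scene_classifier(image_descriptors, labels, vocab_size=500, n_topics=25):
    """image_descriptors: one (n_i, d) descriptor array per training image."""
    vocab = build_vocabulary(np.vstack(image_descriptors), vocab_size)
    bows = np.array([bow_histogram(d, vocab) for d in image_descriptors])
    # Reduce each bag-of-visual-words vector to a topic distribution, then
    # train a discriminative multiway classifier on those topic vectors.
    topics = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    z = topics.fit_transform(bows)
    clf = SVC(kernel="rbf").fit(z, labels)
    return vocab, topics, clf
```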

Relevance: 100.00%

Abstract:

This paper proposes a parallel architecture for estimating the motion of an underwater robot. It is well known that image processing requires a huge amount of computation, mainly at the low level, where the algorithms deal with large quantities of data. In a motion estimation algorithm, correspondences between two images have to be solved at this low level. In underwater imaging, normalised correlation can be a solution in the presence of non-uniform illumination. Due to its regular processing scheme, a parallel implementation of the correspondence problem is an adequate approach to reduce the computation time. Taking into consideration the complexity of the normalised correlation criterion, a new approach based on the parallel organisation of every processor in the architecture is proposed.
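
The matching criterion itself is standard and can be sketched briefly. The following is a minimal serial illustration, not the paper's parallel architecture: a zero-mean normalised correlation score and an exhaustive template scan whose per-window independence is what makes it amenable to parallelisation.

```python
# Minimal sketch of zero-mean normalised correlation for patch matching,
# which tolerates non-uniform illumination between the two images.
import numpy as np

def normalised_correlation(patch_a, patch_b, eps=1e-9):
    """Score in [-1, 1] for two equally sized patches; 1 is a perfect match."""
    a = patch_a.astype(float) - patch_a.mean()
    b = patch_b.astype(float) - patch_b.mean()
    return float((a * b).sum() / (np.sqrt((a * a).sum() * (b * b).sum()) + eps))

def best_match(template, search_image):
    """Exhaustive scan; every window score is independent, so the loop body
    can be distributed across processors in a parallel implementation."""
    th, tw = template.shape
    sh, sw = search_image.shape
    scores = np.full((sh - th + 1, sw - tw + 1), -np.inf)
    for i in range(scores.shape[0]):
        for j in range(scores.shape[1]):
            window = search_image[i:i + th, j:j + tw]
            scores[i, j] = normalised_correlation(template, window)
    return np.unravel_index(np.argmax(scores), scores.shape), scores
```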

Relevance: 100.00%

Abstract:

In this paper we present a novel structure from motion (SfM) approach able to infer 3D deformable models from uncalibrated stereo images. Using a stereo setup dramatically improves the 3D model estimation when the observed 3D shape is mostly deforming without undergoing strong rigid motion. Our approach first calibrates the stereo system automatically and then computes a single metric rigid structure for each frame. Afterwards, these 3D shapes are aligned to a reference view using a RANSAC method in order to compute the mean shape of the object and to select the subset of points on the object which have remained rigid throughout the sequence without deforming. The selected rigid points are then used to compute frame-wise shape registration and to extract the motion parameters robustly from frame to frame. Finally, all this information is used in a global optimization stage with bundle adjustment, which allows the frame-wise initial solution to be refined and the non-rigid 3D model to be recovered. We show results on synthetic and real data that prove the performance of the proposed method even when there is no rigid motion in the original sequence.
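
One building block of this pipeline, selecting the points that move rigidly between a per-frame 3D shape and the reference view, can be sketched as follows. This is a generic RANSAC + SVD (Procrustes) rigid fit under assumed thresholds, not the authors' implementation.

```python
# Hedged sketch: RANSAC selection of rigidly moving 3D points between two
# corresponding point sets, using an SVD-based least-squares rigid fit.
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, mu_d - R @ mu_s

def ransac_rigid_points(src, dst, iters=500, thresh=0.01, seed=0):
    """Return indices of points consistent with a single rigid motion."""
    rng = np.random.default_rng(seed)
    best_inliers = np.array([], dtype=int)
    for _ in range(iters):
        sample = rng.choice(len(src), size=3, replace=False)
        R, t = rigid_fit(src[sample], dst[sample])
        residuals = np.linalg.norm((src @ R.T + t) - dst, axis=1)
        inliers = np.flatnonzero(residuals < thresh)
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers
```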

Relevance: 100.00%

Abstract:

This thesis is divided into two parts. The first part presents and studies telegraph processes, Poisson processes with a telegraph compensator, and telegraph processes with jumps. The study presented in this first part includes the computation of the distribution of each process, their means and variances, and their moment generating functions, among other properties. Using these properties, the second part studies option pricing models based on telegraph processes with jumps. This part describes how to compute the risk-neutral measures, establishes the no-arbitrage condition for this type of model and, finally, computes the prices of European call and put options.
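
For readers unfamiliar with the underlying process, a small Monte Carlo sketch is given below. It is purely illustrative and does not reproduce the thesis' analytical results: the velocity alternates between +c and -c at the events of a Poisson process with rate lam, a jump of size h is added at every switch (the jump convention here is an assumption), and the parameter values are arbitrary.

```python
# Illustrative Monte Carlo simulation of a telegraph process with jumps.
import numpy as np

def simulate_jump_telegraph(t_max, c=1.0, lam=2.0, h=0.1, seed=0):
    """Return (times, values) of one path X(t), sampled at the switch times."""
    rng = np.random.default_rng(seed)
    t, x, v = 0.0, 0.0, c if rng.random() < 0.5 else -c
    times, values = [0.0], [0.0]
    while True:
        tau = rng.exponential(1.0 / lam)            # waiting time to the next switch
        if t + tau > t_max:
            times.append(t_max); values.append(x + v * (t_max - t))
            return np.array(times), np.array(values)
        t += tau
        x += v * tau + h * np.sign(-v)              # linear move, then a jump (assumed convention)
        v = -v                                      # velocity reverses at the Poisson event
        times.append(t); values.append(x)

# Example: crude Monte Carlo estimate of E[X(1)] over many paths.
endpoints = [simulate_jump_telegraph(1.0, seed=s)[1][-1] for s in range(2000)]
print("Monte Carlo mean of X(1):", np.mean(endpoints))
```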

Relevance: 100.00%

Abstract:

This workshop paper reports recent developments in a vision system for traffic interpretation which relies extensively on the use of geometrical and scene context. Firstly, a new approach to pose refinement is reported, based on forces derived from prominent image derivatives found close to an initial hypothesis. Secondly, a parameterised vehicle model is reported, able to represent different vehicle classes. This general vehicle model has been fitted to sample data and subjected to a Principal Component Analysis to create a deformable model of common car types having 6 parameters. We show that the new pose recovery technique is also able to operate on the PCA model, allowing the structure of an initial vehicle hypothesis to be adapted to fit the prevailing context. We report initial experiments with the model, which demonstrate significant improvements to pose recovery.

Relevance: 100.00%

Abstract:

This paper reports the development of a highly parameterised 3-D model able to adopt the shapes of a wide variety of different classes of vehicles (cars, vans, buses, etc.), and its subsequent specialisation to a generic car class which accounts for the most commonly encountered types of car (including saloon, hatchback and estate cars). An interactive tool has been developed to obtain sample data for vehicles from video images. A PCA description of the manually sampled data provides a deformable model in which a single instance is described as a 6-parameter vector. Both the pose and the structure of a car can be recovered by fitting the PCA model to an image. The recovered description is sufficiently accurate to discriminate between vehicle sub-classes.
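
The construction of such a deformable model from manually sampled shapes can be sketched in a few lines. This is a generic PCA shape model under assumed data shapes and names, not the authors' code.

```python
# Hedged sketch: build a 6-parameter PCA deformable model from sampled
# vehicle shapes, and encode/decode individual instances with it.
import numpy as np
from sklearn.decomposition import PCA

def build_vehicle_model(sample_shapes, n_params=6):
    """sample_shapes: array (n_vehicles, n_points, 3) of sampled 3-D vertices."""
    X = sample_shapes.reshape(len(sample_shapes), -1)   # one shape vector per vehicle
    return PCA(n_components=n_params).fit(X)

def encode(model, shape):
    """Project one vehicle shape onto the 6 deformation parameters."""
    return model.transform(shape.reshape(1, -1))[0]

def decode(model, params, n_points):
    """Reconstruct a 3-D shape from a 6-parameter vector (mean + deformation)."""
    return model.inverse_transform(params.reshape(1, -1)).reshape(n_points, 3)
```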

Relevance: 100.00%

Abstract:

In all biological processes, protein molecules and other small molecules interact to carry out their function, forming transient macromolecular complexes. This interaction of two or more molecules can be described by a docking event. Docking is an important phase of structure-based drug design strategies, as it can be used as a method to simulate protein-ligand interactions. Various docking programs exist that allow automated docking, but most of them have limited visualization and user interaction. It would be advantageous if scientists could visualize the molecules participating in the docking process, manipulate their structures and manually dock them in an immersive environment before submitting the new conformations to an automated docking process, which can help stimulate the design/docking process. This could also greatly reduce docking time and resources. To achieve this, we propose a new virtual modelling/docking program in which the advantages of virtual modelling programs and the efficiency of the algorithms in existing docking programs are merged.

Relevance: 100.00%

Abstract:

LIght Detection And Ranging (LIDAR) data for terrain and land surveying has contributed to many environmental, engineering and civil applications. However, the analysis of Digital Surface Models (DSMs) from complex LIDAR data is still challenging. Commonly, the first task when investigating LIDAR point clouds is to separate ground and object points as a preparatory step for further object classification. In this paper, the authors present a novel unsupervised segmentation algorithm, skewness balancing, to separate object and ground points efficiently from high resolution LIDAR point clouds by exploiting statistical moments. The results presented in this paper show its robustness and its potential for commercial applications.
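
A simplified, unoptimised sketch of the skewness-balancing idea is given below; it is an illustration of one common formulation rather than the paper's algorithm: object points skew the height distribution of a LIDAR tile, so the highest points are peeled off until the skewness of the remaining heights drops to zero or below, and what remains is treated as ground.

```python
# Hedged sketch of skewness balancing for ground/object separation.
import numpy as np
from scipy.stats import skew

def skewness_balancing(heights):
    """heights: 1-D array of point elevations; returns a boolean ground mask."""
    order = np.argsort(heights)              # ascending elevation
    z = heights[order]
    n = len(z)
    while n > 3 and skew(z[:n]) > 0.0:       # statistical-moment test on the remainder
        n -= 1                               # drop the current highest point
    ground = np.zeros(len(heights), dtype=bool)
    ground[order[:n]] = True                 # surviving (lowest) points are ground
    return ground
```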

Relevance: 100.00%

Abstract:

The main activity carried out by the geophysicist when interpreting seismic data, in terms of both importance and time spent, is tracking (or picking) seismic events. In practice, this activity turns out to be rather challenging, particularly when the targeted event is interrupted by discontinuities such as geological faults or exhibits lateral changes in seismic character. In recent years, several automated schemes, known as auto-trackers, have been developed to assist the interpreter in this tedious and time-consuming task. The automatic tracking tools available in modern interpretation software packages often employ artificial neural networks (ANNs) to identify seismic picks belonging to target events through a pattern recognition process. The ability of ANNs to track horizons across discontinuities largely depends on how reliably data patterns characterise these horizons. While seismic attributes are commonly used to characterise the amplitude peaks forming a seismic horizon, some researchers in the field claim that inherent seismic information is lost in the attribute extraction process and advocate instead the use of raw data (amplitude samples). This paper investigates the performance of ANNs using either characterisation method, and demonstrates how the complementarity of seismic attributes and raw data can be exploited in conjunction with other geological information in a fuzzy inference system (FIS) to achieve enhanced auto-tracking performance.
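
A rough sketch of the raw-data characterisation mentioned above is shown below. It is only an assumption of how such a classifier might be wired up, not the paper's system: a window of amplitude samples centred on a candidate pick is used directly as the pattern vector for a small neural network that accepts or rejects the pick as part of the target horizon; window size and network shape are illustrative, and picks are assumed to lie away from trace edges.

```python
# Hedged sketch: raw amplitude windows as pattern vectors for pick classification.
import numpy as np
from sklearn.neural_network import MLPClassifier

def raw_pattern(trace, sample_index, half_window=8):
    """Extract 2*half_window+1 amplitude samples around a candidate pick."""
    lo, hi = sample_index - half_window, sample_index + half_window + 1
    return trace[lo:hi]

def train_pick_classifier(traces, picks, labels):
    """traces: list of 1-D amplitude arrays; picks: sample index per example;
    labels: 1 if the pick belongs to the target horizon, else 0."""
    X = np.array([raw_pattern(tr, p) for tr, p in zip(traces, picks)])
    return MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(X, labels)
```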

Relevance: 100.00%

Abstract:

In this paper, we address issues in the segmentation of remotely sensed LIDAR (LIght Detection And Ranging) data. The LIDAR data, which were captured by an airborne laser scanner, contain 2.5-dimensional (2.5D) terrain surface height information, e.g. houses, vegetation, flat fields, rivers, basins, etc. Our aim in this paper is to segment ground (flat field) from non-ground (houses and high vegetation) in hilly urban areas. By projecting the 2.5D data onto a surface, we obtain a texture map as a grey-level image. Based on this image, Gabor wavelet filters are applied to generate Gabor wavelet features. These features are then grouped into various windows, and a combination of their first- and second-order statistics is used as a measure to determine the surface properties. The test results have shown that ground areas can successfully be segmented from LIDAR data. Most buildings and high vegetation can be detected. In addition, the Gabor wavelet transform can partially remove hill or slope effects in the original data by tuning the Gabor parameters.
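
The feature-extraction stage can be sketched as follows. The filter frequencies, orientations and window size are assumptions chosen for illustration, not the paper's settings.

```python
# Minimal sketch: Gabor filter bank on a LIDAR-derived grey-level image,
# with first/second-order statistics pooled over windows.
import numpy as np
from skimage.filters import gabor

def gabor_window_features(height_image, frequencies=(0.1, 0.2),
                          thetas=(0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4),
                          win=16):
    """Return an array (n_windows_y, n_windows_x, n_filters * 2) holding the
    per-window mean and variance of each Gabor response magnitude."""
    responses = []
    for f in frequencies:
        for th in thetas:
            real, imag = gabor(height_image, frequency=f, theta=th)
            responses.append(np.hypot(real, imag))       # response magnitude
    h, w = height_image.shape
    ny, nx = h // win, w // win
    feats = np.zeros((ny, nx, len(responses) * 2))
    for i in range(ny):
        for j in range(nx):
            for k, r in enumerate(responses):
                block = r[i*win:(i+1)*win, j*win:(j+1)*win]
                feats[i, j, 2*k] = block.mean()           # first-order statistic
                feats[i, j, 2*k + 1] = block.var()        # second-order statistic
    return feats
```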

Relevance: 100.00%

Abstract:

In this paper, a fuzzy Markov random field (FMRF) model is used to segment land objects into tree, grass, building, and road regions by fusing remotely sensed LIDAR data and co-registered color bands, i.e. a scanned aerial color (RGB) photo and a near infra-red (NIR) photo. An FMRF model is defined as a Markov random field (MRF) model in a fuzzy domain. Three optimization algorithms for the FMRF model, i.e. Lagrange multiplier (LM), iterated conditional mode (ICM), and simulated annealing (SA), are compared with respect to computational cost and segmentation accuracy. The results have shown that the FMRF model-based ICM algorithm balances computational cost and segmentation accuracy in land-cover segmentation from LIDAR data and co-registered bands.
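
For readers unfamiliar with ICM, a plain (non-fuzzy) version is sketched below; the fuzzy-MRF extension and the LM/SA alternatives compared in the paper are not reproduced here, and the Potts smoothness weight is an assumption.

```python
# Hedged sketch of iterated conditional modes (ICM) for MRF segmentation:
# each pixel is repeatedly reassigned to the label that minimises a unary
# data cost plus a Potts penalty over its 4-neighbourhood.
import numpy as np

def icm_segment(unary_cost, beta=1.0, n_iters=10):
    """unary_cost: array (H, W, n_labels) of per-pixel, per-label data costs.
    Returns an (H, W) label map."""
    h, w, n_labels = unary_cost.shape
    labels = unary_cost.argmin(axis=2)                   # initial labelling
    for _ in range(n_iters):
        for i in range(h):
            for j in range(w):
                best_label, best_energy = labels[i, j], np.inf
                for l in range(n_labels):
                    energy = unary_cost[i, j, l]
                    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < h and 0 <= nj < w and labels[ni, nj] != l:
                            energy += beta               # Potts smoothness penalty
                    if energy < best_energy:
                        best_label, best_energy = l, energy
                labels[i, j] = best_label
    return labels
```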