39 results for Computer Imaging, Vision, Pattern Recognition and Graphics


Relevance:

100.00%

Publisher:

Abstract:

This paper defines the 3D reconstruction problem as the process of reconstructing a 3D scene from numerous 2D images of that scene. The problem is well known to be ill-posed, and 3D reconstruction algorithms employ numerous constraints and assumptions to reduce the solution space. Unfortunately, most constraints only work in a certain range of situations, and constraints are often built into the most fundamental methods (e.g., area-based matching assumes that all pixels in the window belong to the same object). This paper presents a novel formulation of the 3D reconstruction problem, using a voxel framework and first-order logic equations, that contains no additional constraints or assumptions. Solving this formulation for a set of input images yields all possible solutions for that set, rather than picking the solution deemed most likely. Using this formulation, the paper studies the problem of uniqueness in 3D reconstruction and how the solution space changes for different configurations of input images. It is found that a unique solution cannot be guaranteed, no matter how many images of the scene are taken, how they are oriented, or how much color variation the scene itself contains. Results of using the formulation to reconstruct a few small voxel spaces are also presented. They show that the number of solutions is extremely large even for very small voxel spaces (a 5 × 5 voxel space gives 10 to 10^7 solutions), demonstrating the need for constraints to reduce the solution space to a reasonable size. Finally, because of the discrete nature of the formulation, the solution-space size can be calculated easily, making the formulation a useful tool for numerically evaluating the usefulness of any added constraints.
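A toy enumeration makes the non-uniqueness result concrete. The sketch below is not the paper's first-order-logic formulation; it is a hedged Python stand-in that brute-forces every colored-voxel configuration of a tiny 2D grid and keeps those that are photo-consistent with two 1D views, so the full solution set (rather than a single "most likely" scene) can be counted.

```python
# A minimal sketch, assuming a 2D voxel grid seen by two orthographic 1D
# "cameras" (left and top); all names and the toy scene are invented.
from itertools import product

COLORS = (None, 'r', 'g')   # None = empty voxel; two scene colors
N = 3                       # 3 x 3 voxel space, kept tiny on purpose

def render(grid):
    """First non-empty voxel along each ray; None = background pixel."""
    left = tuple(next((c for c in row if c is not None), None) for row in grid)
    top = tuple(next((c for c in col if c is not None), None) for col in zip(*grid))
    return left, top

# A ground-truth scene and the two images it produces.
truth = (('r', None, 'g'),
         (None, None, None),
         ('g', None, 'r'))
images = render(truth)

# Enumerate ALL photo-consistent scenes, in the spirit of the formulation,
# instead of committing to one reconstruction.
solutions = [g for g in product(product(COLORS, repeat=N), repeat=N)
             if render(g) == images]
print(f"{len(solutions)} consistent scenes for 2 views of a {N}x{N} space")
```

Even this 3 × 3 toy admits many consistent scenes, which is the point the abstract's 5 × 5 experiment makes at larger scale.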

Relevance:

100.00%

Publisher:

Abstract:

These are the full proceedings of the conference.

Relevance:

100.00%

Publisher:

Abstract:

Visual pigments, the molecules in photoreceptors that initiate the process of vision, are inherently dichroic, differentially absorbing light according to its axis of polarization. Many animals have taken advantage of this property to build receptor systems capable of analyzing the polarization of incoming light, as polarized light is abundant in natural scenes (commonly being produced by scattering or reflection). Such polarization sensitivity has long been associated with behavioral tasks like orientation or navigation. However, only recently have we become aware that it can be incorporated into a high-level visual perception akin to color vision, permitting segmentation of a viewed scene into regions that differ in their polarization. By analogy to color vision, we call this capacity polarization vision. It is apparently used for tasks like those that color vision specializes in: contrast enhancement, camouflage breaking, object recognition, and signal detection and discrimination. While color is very useful in terrestrial or shallow-water environments, it is an unreliable cue deeper in water due to the spectral modification of light as it travels through water of various depths or of varying optical quality. Here, polarization vision has special utility and consequently has evolved in numerous marine species, as well as at least one terrestrial animal. In this review, we consider recent findings concerning polarization vision and its significance in biological signaling.
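As an aside on the physics the review builds on, the sketch below illustrates, under textbook assumptions, how a few differently oriented dichroic receptors could recover the intensity, degree, and angle of linear polarization. It is an illustration of the principle (Malus-style responses, three channels at 0/60/120 degrees, invented numbers), not a model from the review.

```python
import math

def receptor_response(intensity, dolp, aop_deg, axis_deg):
    """Malus-style response of a dichroic receptor with absorption axis
    `axis_deg` to partially linearly polarized light."""
    delta = math.radians(aop_deg - axis_deg)
    return intensity * (1 + dolp * math.cos(2 * delta)) / 2

def analyze(r0, r60, r120):
    """Invert three channels at 0/60/120 deg into intensity, degree (dolp),
    and angle (aop) of polarization via Stokes-like terms."""
    i = 2 * (r0 + r60 + r120) / 3
    s1 = 2 * (2 * r0 - r60 - r120) / 3
    s2 = 2 * (r60 - r120) / math.sqrt(3)
    dolp = math.hypot(s1, s2) / i
    aop = math.degrees(0.5 * math.atan2(s2, s1)) % 180
    return i, dolp, aop

# Simulated scene pixel: 40% polarized at 25 degrees.
channels = [receptor_response(1.0, 0.4, 25.0, a) for a in (0, 60, 120)]
print(analyze(*channels))   # ~ (1.0, 0.4, 25.0)
```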

Relevance:

100.00%

Publisher:

Abstract:

We introduce a new second-order method of texture analysis called Adaptive Multi-Scale Grey Level Co-occurrence Matrix (AMSGLCM), based on the well-known Grey Level Co-occurrence Matrix (GLCM) method. The method deviates significantly from GLCM in that features are extracted not via a fixed 2D weighting function of co-occurrence matrix elements but by a variable summation of matrix elements in localized 3D neighborhoods. We then present a new methodology for extracting optimized, highly discriminant features from these localized areas using adaptive Gaussian weighting functions. Genetic Algorithm (GA) optimization is used to produce a set of features whose classification worth is evaluated by discriminatory power and feature-correlation considerations. We critically appraised the performance of our method against GLCM in pairwise classification of images from visually similar texture classes, drawn from Markov Random Field (MRF) synthesized, natural, and biological sources. In these cross-validated classification trials, our method demonstrated significant benefits over GLCM, including increased feature discriminatory power, automatic feature adaptability, and significantly improved classification performance.
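For readers unfamiliar with the baseline, the sketch below computes a plain GLCM and two of its classic fixed-weighting features. AMSGLCM's contribution, per the abstract, is to replace exactly this kind of fixed 2D weighting with GA-optimized adaptive Gaussian weights over localized matrix neighborhoods, which this sketch does not attempt; the test image and parameters are stand-ins.

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Co-occurrence counts of grey-level pairs at displacement (dx, dy),
    normalized to joint probabilities."""
    m = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

def contrast(p):
    i, j = np.indices(p.shape)
    return float(np.sum(p * (i - j) ** 2))  # fixed weighting w(i,j) = (i-j)^2

def energy(p):
    return float(np.sum(p ** 2))            # fixed weighting w(i,j) = p(i,j)

rng = np.random.default_rng(0)
img = rng.integers(0, 8, size=(64, 64))     # stand-in texture patch
p = glcm(img)
print(contrast(p), energy(p))
```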

Relevance:

100.00%

Publisher:

Abstract:

The expectation-maximization (EM) algorithm has attracted considerable interest in recent years as the basis for various algorithms in neural-network application areas such as pattern recognition. However, there exist some misconceptions concerning its application to neural networks. In this paper, we clarify these misconceptions and consider how the EM algorithm can be adopted to train multilayer perceptron (MLP) and mixture-of-experts (ME) networks for multiclass classification. We identify situations where applying the EM algorithm to train MLP networks may be of limited value and discuss ways of handling the difficulties. For ME networks, the literature reports that networks trained by the EM algorithm, with the iteratively reweighted least squares (IRLS) algorithm in the inner loop of the M-step, often performed poorly in multiclass classification. However, we found that the convergence of the IRLS algorithm is stable and that the log likelihood increases monotonically when a learning rate smaller than one is adopted. We also propose the use of an expectation-conditional maximization (ECM) algorithm to train ME networks; its performance is demonstrated to be superior to that of the IRLS algorithm on some simulated and real data sets.
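The stabilizing trick reported above, damping the inner-loop IRLS step with a learning rate below one, is easy to show in isolation. The sketch below applies it to a single binary logistic fit as a stand-in for one M-step inner loop of an ME network; the data, names, and rate are illustrative, not the paper's setup.

```python
import numpy as np

def irls_logistic(X, y, lr=0.5, iters=50):
    """Damped IRLS for logistic regression: a Newton step scaled by lr < 1,
    the damping the authors report as stabilizing convergence."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ w))        # current class probabilities
        W = p * (1 - p)                     # IRLS weights (diagonal)
        H = X.T @ (W[:, None] * X)          # weighted normal matrix
        g = X.T @ (y - p)                   # gradient of the log likelihood
        w = w + lr * np.linalg.solve(H, g)  # damped Newton/IRLS update
    return w

rng = np.random.default_rng(1)
X = np.c_[np.ones(200), rng.normal(size=(200, 2))]
true_w = np.array([-0.5, 2.0, -1.0])
y = (rng.random(200) < 1 / (1 + np.exp(-X @ true_w))).astype(float)
print(irls_logistic(X, y, lr=0.5))          # roughly recovers true_w
```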

Relevance:

100.00%

Publisher:

Abstract:

Automatic signature verification is a well-established and active area of research with numerous applications such as bank check verification and ATM access. This paper proposes a novel approach to automatic off-line signature verification and forgery detection based on fuzzy modeling with the Takagi-Sugeno (TS) model. Verification and forgery detection are carried out using angle features extracted via a box approach. Each feature corresponds to a fuzzy set. The features are fuzzified by an exponential membership function in the TS model, modified to include structural parameters devised to account for variations due to handwriting styles and moods. The membership functions constitute weights in the TS model, and optimizing the model's output with respect to the structural parameters yields their values. We derive two TS models: one with a rule for each input feature (multiple rules) and one with a single rule for all input features. We find that the multiple-rule TS model is better than the single-rule model at detecting three types of forgeries (random, skilled, and unskilled) from a large database of sample signatures, in addition to verifying genuine signatures. We also devise three approaches, one innovative and two intuitive, that use the multiple-rule TS model for improved performance.
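A hedged sketch of the moving parts named above: an exponential membership with stand-in structural parameters and a multiple-rule TS evaluation, one rule per angle feature. The exact membership form, parameter optimization, and feature extraction in the paper differ; every number and name here is invented.

```python
import numpy as np

def membership(x, mean, sigma, s=1.0, t=1.0):
    """Exponential membership; s and t stand in for the paper's structural
    parameters absorbing writing-style variation (illustrative form only)."""
    return np.exp(-s * np.abs(x - mean) ** t / (sigma + 1e-9))

def ts_output(features, means, sigmas, coeffs, intercepts):
    """Multiple-rule TS evaluation: one linear consequent per feature,
    blended by the memberships acting as rule weights."""
    mu = membership(features, means, sigmas)
    y_rules = coeffs * features + intercepts
    return float(np.sum(mu * y_rules) / np.sum(mu))

# Reference statistics from genuine training signatures (made-up numbers),
# then a score for a questioned signature's angle features.
means = np.array([0.42, 0.63, 0.55])
sigmas = np.array([0.05, 0.04, 0.06])
test = np.array([0.44, 0.70, 0.52])
score = ts_output(test, means, sigmas, np.ones(3), np.zeros(3))
print(score)   # compare against a threshold fitted on genuine samples
```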

Relevance:

100.00%

Publisher:

Abstract:

Aim: Polysomnography (PSG) is the current standard protocol for investigating sleep disordered breathing (SDB) in children. At present, reliable screening tests for both central (CE) and obstructive (OE) respiratory events are limited. This study compared three indices derived from pulse oximetry and electrocardiogram (ECG) against the PSG gold standard: heart rate (HR) variability, arterial blood oxygen desaturation (SaO2), and pulse transit time (PTT). Methods: 15 children (12 male, aged 3-14 years) were recruited from routine PSG studies. The characteristics of the three indices were based on known criteria for respiratory events (RPE), and their estimates, singly and in combination, were evaluated against simultaneously scored PSG recordings. Results: 215 RPE and 215 tidal breathing events were analysed. For OE, sensitivity was 0.703 (HR), 0.047 (SaO2), 0.750 (PTT), 0 (all three indices) and 0.828 (either of the indices); the corresponding specificities were 0.891, 0.938, 0.922, 0.953 and 0.859. For CE, sensitivity was 0.715 (HR), 0.278 (SaO2), 0.662 (PTT), 0.040 (all indices) and 0.868 (either of the indices); the corresponding specificities were 0.815, 0.954, 0.901, 0.960 and 0.762. Conclusions: These preliminary findings suggest that the latter combination (either of the indices) of these non-invasive measures is a promising screening method for SDB in children.
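The "all three indices" and "either of the indices" figures above are AND/OR fusions of per-index detections scored against the PSG labels. The sketch below reproduces that scoring on simulated detections; the error rates and data are invented, and real inputs would come from the HR, SaO2 and PTT criteria.

```python
import numpy as np

def sens_spec(pred, truth):
    """Sensitivity and specificity of boolean detections vs. PSG labels."""
    tp = np.sum(pred & truth);   fn = np.sum(~pred & truth)
    tn = np.sum(~pred & ~truth); fp = np.sum(pred & ~truth)
    return tp / (tp + fn), tn / (tn + fp)

rng = np.random.default_rng(2)
truth = rng.random(430) < 0.5   # ~215 events, ~215 tidal breathing segments
# Stand-in per-index detections: the true label flipped with some error rate.
hr, sao2, ptt = (truth ^ (rng.random(430) < e) for e in (0.3, 0.6, 0.25))
print("AND (all three):", sens_spec(hr & sao2 & ptt, truth))
print("OR (either):    ", sens_spec(hr | sao2 | ptt, truth))
```

As in the study, the AND rule trades sensitivity for specificity while the OR rule does the reverse, which is why the "either" combination screens best.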

Relevance:

100.00%

Publisher:

Abstract:

This paper presents an innovative approach to signature verification and forgery detection based on fuzzy modeling. The signature image is binarized, resized to a fixed-size window, and thinned. The thinned image is then partitioned into eight sub-images, called boxes, using the horizontal density approximation approach. Each sub-image is further resized and partitioned into twelve sub-images using uniform partitioning. The feature considered is the normalized vector angle (α) from each box. Each feature extracted from the sample signatures gives rise to a fuzzy set. Since the choice of a proper fuzzification function is crucial for verification, we devise a new fuzzification function with structural parameters that can adapt to variations in the fuzzy sets. This function is employed to develop a complete forgery detection and verification system.
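A sketch of the partitioning idea: split the thinned image into eight horizontal-density boxes and take one normalized angle per box. The thinning step is skipped, and the angle definition used here (foreground centroid from the box's lower-left corner) is a plausible stand-in rather than the paper's exact feature.

```python
import numpy as np

def density_boxes(img, n=8):
    """Cut rows so each horizontal band holds ~1/n of the foreground pixels,
    approximating the horizontal density partition."""
    cum = np.cumsum(img.sum(axis=1))
    cuts = [int(np.searchsorted(cum, cum[-1] * k / n)) for k in range(1, n)]
    bounds = [0] + cuts + [img.shape[0]]
    return [img[bounds[i]:bounds[i + 1]] for i in range(n)]

def angle_feature(box):
    """Normalized angle of the foreground centroid from the lower-left corner."""
    ys, xs = np.nonzero(box)
    if len(xs) == 0:
        return 0.0
    h = box.shape[0]
    return np.degrees(np.arctan2((h - 1) - ys.mean(), xs.mean() + 1)) / 90.0

rng = np.random.default_rng(3)
sig = (rng.random((60, 160)) < 0.1).astype(np.uint8)   # stand-in thinned image
features = [angle_feature(b) for b in density_boxes(sig)]
print(np.round(features, 3))   # one fuzzified angle feature per box
```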

Relevance:

100.00%

Publisher:

Abstract:

In this paper, we present a new scheme for off-line recognition of multi-font numerals using the Takagi-Sugeno (TS) model. In this scheme, the binary image of a character is partitioned into a fixed number of sub-images called boxes. The features consist of normalized vector distances (γ) from each box. Each feature extracted from the different fonts gives rise to a fuzzy set. However, when only a small number of fonts is available, as with multi-font numerals, the choice of a proper fuzzification function is crucial. We therefore devise a new fuzzification function whose parameters account for the variations in the fuzzy sets, and employ it in the TS model for the recognition of multi-font numerals.
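In the same spirit as the previous sketch, the one below computes one normalized distance per grid box for a binary numeral image. Taking the mean foreground distance from the box corner, scaled by the box diagonal, is an illustrative stand-in for the paper's γ, not its exact definition; the grid size and test image are invented.

```python
import numpy as np

def gamma_features(img, rows=4, cols=3):
    """One normalized vector distance per box of a fixed rows x cols grid."""
    h, w = img.shape
    feats = []
    for r in range(rows):
        for c in range(cols):
            box = img[r * h // rows:(r + 1) * h // rows,
                      c * w // cols:(c + 1) * w // cols]
            ys, xs = np.nonzero(box)
            if len(xs) == 0:
                feats.append(0.0)
                continue
            d = np.hypot(ys, xs).mean()          # mean distance from corner
            feats.append(d / np.hypot(*box.shape))  # normalize by diagonal
    return np.array(feats)

rng = np.random.default_rng(4)
digit = (rng.random((32, 24)) < 0.2).astype(np.uint8)  # stand-in binary digit
print(np.round(gamma_features(digit), 3))   # feature vector fed to the TS model
```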