885 results for 3D object recognition
Abstract:
This paper introduces the Interlevel Product (ILP), a transform based upon the Dual-Tree Complex Wavelet Transform. ILP coefficients are complex valued: their magnitudes indicate the amplitude of multilevel features, and their phases indicate the nature of these features (e.g. ridges vs. edges). In particular, the phases of ILP coefficients are approximately invariant to small shifts in the original images. We accordingly introduce this transform as a solution to coarse-scale template matching, where alignment concerns between decimation of a target and decimation of a larger search image can be mitigated while computational efficiency is maintained. Furthermore, template matching with ILP coefficients can provide several intuitive "near-matches" that may be of interest in image retrieval applications. © 2005 IEEE.
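A minimal sketch of the phase-comparison idea behind matching with complex wavelet coefficients, assuming the complex subband coefficients are already available from a dual-tree complex wavelet toolbox; the scoring function and names below are illustrative and are not the paper's exact ILP definition.

```python
import numpy as np

def phase_match_score(template_coeffs, window_coeffs, eps=1e-8):
    """Compare two arrays of complex subband coefficients.

    Magnitudes weight each location by feature strength; the cosine of
    the phase difference rewards coefficients whose phases agree, which
    is the (approximately shift-invariant) property the ILP exploits.
    """
    t = np.asarray(template_coeffs)
    w = np.asarray(window_coeffs)
    weight = np.abs(t) * np.abs(w)
    agreement = np.cos(np.angle(t) - np.angle(w))
    return float(np.sum(weight * agreement) / (np.sum(weight) + eps))

# Usage idea: slide the template's coarse-scale coefficients over those of
# a larger search image and keep the best-scoring offsets as candidate
# matches (and "near-matches" for retrieval).
```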
Abstract:
In this paper, a novel cortex-inspired feed-forward hierarchical object recognition system based on complex wavelets is proposed and tested. Complex wavelets offer three key properties for object representation: shift invariance, which enables the extraction of stable local features; good directional selectivity, which simplifies the determination of image orientations; and limited redundancy, which allows efficient signal analysis through the multi-resolution decomposition that complex wavelets provide. We find that the implementation of the HMAX model for object recognition in [1, 2] is rather over-complete and includes too much redundant information and processing, and we have optimized the structure of the model to make it more efficient. Specifically, we use the Caltech 5 standard dataset to compare against Serre's model in [2] (which employs Gabor filter banks). Results demonstrate that the complex wavelet model is about four times faster than Serre's model while giving comparable recognition performance. © 2011 IEEE.
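A minimal sketch of the C1-style local max pooling that such a hierarchy applies to oriented subband responses, assuming the oriented complex subbands have been computed beforehand; scipy's maximum_filter stands in here, and this is not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def c1_pool(subbands, pool_size=8):
    """HMAX-style C1 stage over a list of oriented complex 2-D subbands.

    The magnitude plays the role of the S1 (simple-cell) response; a
    local max followed by subsampling gives tolerance to small shifts,
    which is where the shift invariance of complex wavelets helps.
    """
    pooled = []
    for band in subbands:
        s1 = np.abs(band)                        # S1-like response
        c1 = maximum_filter(s1, size=pool_size)  # local max pooling
        pooled.append(c1[::pool_size, ::pool_size])
    return pooled
```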
Abstract:
This paper presents a method for vote-based 3D shape recognition and registration, in particular using mean shift on 3D pose votes in the space of direct similarity transforms for the first time. We introduce a new distance between poses in this space, the SRT distance. It is left-invariant, unlike the Euclidean distance, and has a unique, closed-form mean, in contrast to the Riemannian distance, so it is fast to compute. We demonstrate improved performance over the state of the art in both recognition and registration on a real and challenging dataset, by comparing our distance with others in a mean shift framework, as well as with the commonly used Hough voting approach. © 2011 IEEE.
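A minimal sketch of mean shift over pose votes, using a Gaussian kernel and a plain Euclidean distance on a vectorised pose purely for illustration; the paper's SRT distance and its closed-form mean are not reproduced here.

```python
import numpy as np

def mean_shift_mode(votes, start, bandwidth=0.5, iters=100, tol=1e-6):
    """Locate a density mode among (N, D) vectorised pose votes.

    `votes` could hold, e.g., translation plus log-scale per hypothesis;
    the kernel-weighted average is iterated from `start` until it stops
    moving, giving the dominant pose hypothesis.
    """
    votes = np.asarray(votes, dtype=float)
    x = np.asarray(start, dtype=float)
    for _ in range(iters):
        d2 = np.sum((votes - x) ** 2, axis=1)
        w = np.exp(-0.5 * d2 / bandwidth ** 2)
        x_new = (w[:, None] * votes).sum(axis=0) / w.sum()
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x
```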
Abstract:
Tracking methods have the potential to retrieve the spatial location of project-related entities such as personnel and equipment at construction sites, which can facilitate several construction management tasks. Existing tracking methods are mainly based on Radio Frequency (RF) technologies and thus require manual deployment of tags. On construction sites with numerous entities, tag installation, maintenance, and decommissioning become an issue, since they increase the cost and time needed to implement these tracking methods. To address these limitations, this paper proposes an alternative vision-based 3D tracking method. It operates by tracking the designated object in 2D video frames and correlating the tracking results from multiple pre-calibrated views using epipolar geometry. The methodology presented in this paper has been implemented and tested on videos taken under controlled experimental conditions. Results are compared with the actual 3D positions to validate its performance.
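A minimal sketch of the multi-view correlation step, assuming two pre-calibrated 3x4 projection matrices and matched 2D tracks of the object; OpenCV's triangulatePoints is used as a convenient stand-in, not the paper's implementation.

```python
import numpy as np
import cv2

def triangulate_track(P1, P2, pts1, pts2):
    """Recover 3D positions from matched 2D tracks in two calibrated views.

    P1, P2 : 3x4 projection matrices obtained from pre-calibration.
    pts1, pts2 : (N, 2) arrays of the tracked entity's image positions.
    Returns an (N, 3) array of estimated 3D positions.
    """
    pts1 = np.asarray(pts1, dtype=float).T  # 2xN, as OpenCV expects
    pts2 = np.asarray(pts2, dtype=float).T
    X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)  # 4xN homogeneous
    return (X_h[:3] / X_h[3]).T
```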
Abstract:
Building on the DBF nets proposed by Wang Shoujue, this paper discusses the model and properties of DBF neural networks. For pattern recognition applications, the algorithm and its hardware implementation are presented. We performed experiments on the recognition of omnidirectionally oriented rigid objects on the same level, using direction basis function neural networks, which work by covering the high-dimensional geometrical distribution of the sample set in feature space. Many animal and vehicle models (even ones with rather similar shapes) were recognized omnidirectionally thousands of times. For a total of 8800 tests, the correct recognition rate is 98.75%, and the error rate and rejection rate are 0.5% and 1.25%, respectively. (C) 2003 Elsevier Inc. All rights reserved.
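A minimal sketch of the covering idea: each class's training samples define neighbourhoods in feature space, a test sample inside some neighbourhood takes the label of the nearest covering sample, and a sample outside every neighbourhood is rejected rather than misclassified. The spherical neighbourhoods and fixed radius are illustrative simplifications, not the DBF network itself.

```python
import numpy as np

def covering_classify(x, train_X, train_y, radius):
    """Classify `x` only if it falls inside some training neighbourhood.

    train_X : (N, D) training features; train_y : (N,) class labels.
    Returns the label of the nearest covering sample, or None (reject),
    which is how a high correct rate can coexist with a low error rate.
    """
    d = np.linalg.norm(np.asarray(train_X) - np.asarray(x), axis=1)
    inside = d <= radius
    if not np.any(inside):
        return None  # reject: outside every covered region
    return np.asarray(train_y)[np.argmin(np.where(inside, d, np.inf))]
```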
Abstract:
In this paper, we propose a new scheme for omnidirectional object recognition in free space. The proposed scheme divides the above problem into several omnidirectional object-recognition problems with different depression angles. An omnidirectional object-recognition system with oblique observation directions, based on a new recognition theory, Biomimetic Pattern Recognition (BPR), is discussed in detail. Based on it, we can determine the size of the training sample set for the omnidirectional object-recognition system in free space. Omnidirectional cognition tests were performed on various kinds of animal models of rather similar shapes. For a total of 8400 tests, the correct recognition rate is 99.89%, with a rejection rate of 0.11% and a zero error rate. Experimental results are presented to show that the proposed approach outperforms three types of SVMs, with either a third-degree polynomial kernel or a radial basis function kernel.
Abstract:
Many current recognition systems use constrained search to locate objects in cluttered environments. Previous formal analysis has shown that the expected amount of search is quadratic in the number of model and data features if all the data is known to come from a single object, but is exponential when spurious data is included. If one can group the data into subsets likely to have come from a single object, then terminating the search once a "good enough" interpretation is found reduces the expected search to cubic. Without successful grouping, terminated search is still exponential. These results apply to finding instances of a known object in the data. In this paper, we turn to the problem of selecting models from a library, and examine the combinatorics of determining that a candidate object is not present in the data. We show that the expected search is again exponential, implying that naïve approaches to indexing are likely to carry an expensive overhead, since an exponential amount of work is needed to weed out each of the incorrect models. The analytic results are shown to be in agreement with empirical data for cluttered object recognition.
Abstract:
We report a series of psychophysical experiments that explore different aspects of the problem of object representation and recognition in human vision. Contrary to the paradigmatic view which holds that the representations are three-dimensional and object-centered, the results consistently support the notion of view-specific representations that include at most partial depth information. In simulated experiments that involved the same stimuli shown to the human subjects, computational models built around two-dimensional multiple-view representations replicated our main psychophysical results, including patterns of generalization errors and the time course of perceptual learning.
Abstract:
In order to recognize an object in an image, we must determine the best transformation from the object model to the image. In this paper, we show that for features from coplanar surfaces which undergo linear transformations in space, there exist projections invariant to the surface motions up to rotations in the image field. To use this property, we propose a new alignment approach to object recognition based on centroid alignment of corresponding feature groups. This method uses only a single 2D model-data pair. Experimental results show the robustness of the proposed method against perturbations of feature positions.
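A minimal sketch of centroid alignment for corresponding 2D feature groups: translate so the group centroids coincide, then estimate the remaining image-plane rotation by a least-squares (Kabsch-style) fit. The function below is illustrative and is not the paper's exact formulation.

```python
import numpy as np

def centroid_align(model_pts, data_pts):
    """Align (N, 2) model features to corresponding data features.

    Returns a rotation R and translation t such that, approximately,
    data = model @ R.T + t in the least-squares sense.
    """
    m = np.asarray(model_pts, dtype=float)
    d = np.asarray(data_pts, dtype=float)
    m_c, d_c = m.mean(axis=0), d.mean(axis=0)    # group centroids
    H = (m - m_c).T @ (d - d_c)                  # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # keep a proper rotation
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = d_c - m_c @ R.T
    return R, t
```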
Abstract:
How does the brain recognize three-dimensional objects? We trained monkeys to recognize computer-rendered objects presented from an arbitrarily chosen training view, and subsequently tested their ability to generalize recognition to other views. Our results provide additional evidence in favor of a recognition model that accomplishes view-invariant performance by storing a limited number of object views or templates, together with the capacity to interpolate between the templates (Poggio and Edelman, 1990).