991 results for 280208 Computer Vision


Relevance: 80.00%

Publisher:

Abstract:

Nearest neighbor retrieval is the task of identifying, given a database of objects and a query object, the objects in the database that are the most similar to the query. Retrieving nearest neighbors is a necessary component of many practical applications, in fields as diverse as computer vision, pattern recognition, multimedia databases, bioinformatics, and computer networks. At the same time, finding nearest neighbors accurately and efficiently can be challenging, especially when the database contains a large number of objects, and when the underlying distance measure is computationally expensive. This thesis proposes new methods for improving the efficiency and accuracy of nearest neighbor retrieval and classification in spaces with computationally expensive distance measures. The proposed methods are domain-independent and can be applied in arbitrary spaces, including non-Euclidean and non-metric spaces. In this thesis, particular emphasis is given to computer vision applications related to object and shape recognition, where expensive non-Euclidean distance measures are often needed to achieve high accuracy. The first contribution of this thesis is the BoostMap algorithm for embedding arbitrary spaces into a vector space with a computationally efficient distance measure. Using this approach, an approximate set of nearest neighbors can be retrieved efficiently, often orders of magnitude faster than retrieval using the exact distance measure in the original space. The BoostMap algorithm has two key distinguishing features with respect to existing embedding methods. First, embedding construction explicitly maximizes the amount of nearest neighbor information preserved by the embedding. Second, embedding construction is treated as a machine learning problem, in contrast to existing methods that are based on geometric considerations. The second contribution is a method for constructing query-sensitive distance measures for the purposes of nearest neighbor retrieval and classification. In high-dimensional spaces, query-sensitive distance measures allow for automatic selection of the dimensions that are the most informative for each specific query object. It is shown theoretically and experimentally that query-sensitivity increases the modeling power of embeddings, allowing embeddings to capture a larger amount of the nearest neighbor structure of the original space. The third contribution is a method for speeding up nearest neighbor classification by combining multiple embedding-based nearest neighbor classifiers in a cascade. In a cascade, computationally efficient classifiers are used to quickly classify easy cases, and classifiers that are more computationally expensive and also more accurate are applied only to objects that are harder to classify. An interesting property of the proposed cascade method is that, under certain conditions, classification time actually decreases as the size of the database increases, a behavior that is in stark contrast to the behavior of typical nearest neighbor classification systems. The proposed methods are evaluated experimentally in several different applications: hand shape recognition, offline character recognition, online character recognition, and efficient retrieval of time series. In all datasets, the proposed methods lead to significant improvements in accuracy and efficiency compared to existing state-of-the-art methods. In some datasets, the general-purpose methods introduced in this thesis even outperform domain-specific methods that have been custom-designed for such datasets.
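To make the filter-and-refine use of such embeddings concrete, the following sketch retrieves approximate nearest neighbors by ranking candidates with a cheap distance in an embedded space and then re-ranking a short list with the exact measure. It is a minimal illustration under simplifying assumptions (a Lipschitz-style embedding built from randomly chosen reference objects and a placeholder `expensive_distance`), not the learned BoostMap construction itself.

```python
import numpy as np

def expensive_distance(x, y):
    # Stand-in for a costly non-Euclidean measure (e.g., shape matching).
    return np.linalg.norm(x - y)

def build_embedding(database, references):
    # Lipschitz-style embedding: each coordinate is the distance to one
    # reference object.  BoostMap instead *learns* which 1D embeddings to
    # combine, but the filter-and-refine usage below is the same.
    return np.array([[expensive_distance(x, r) for r in references]
                     for x in database])

def filter_and_refine(query, database, references, emb, p=50, k=5):
    q_emb = np.array([expensive_distance(query, r) for r in references])
    # Filter step: cheap L1 distance in the embedded space.
    approx = np.abs(emb - q_emb).sum(axis=1)
    candidates = np.argsort(approx)[:p]
    # Refine step: exact (expensive) distance only on the short list.
    exact = [(expensive_distance(query, database[i]), i) for i in candidates]
    return [i for _, i in sorted(exact)[:k]]

rng = np.random.default_rng(0)
db = rng.normal(size=(1000, 16))
refs = db[rng.choice(len(db), 10, replace=False)]
emb = build_embedding(db, refs)
print(filter_and_refine(rng.normal(size=16), db, refs, emb))
```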

Relevance: 80.00%

Publisher:

Abstract:

Spotting patterns of interest in an input signal is a very useful task in many different fields, including medicine, bioinformatics, economics, speech recognition, and computer vision. Example instances of this problem include spotting an object of interest in an image (e.g., a tumor), a pattern of interest in a time-varying signal (e.g., audio analysis), or an object of interest moving in a specific way (e.g., a human's body gesture). Traditional spotting methods, which are based on Dynamic Time Warping or hidden Markov models, use some variant of dynamic programming to register the pattern and the input while accounting for temporal variation between them. At the same time, those methods often suffer from several shortcomings: they may give meaningless solutions when input observations are unreliable or ambiguous, they require a high-complexity search across the whole input signal, and they may give incorrect solutions if some patterns appear as smaller parts within other patterns. In this thesis, we develop a framework that addresses these three problems, and evaluate the framework's performance in spotting and recognizing hand gestures in video. The first contribution is a spatiotemporal matching algorithm that extends the dynamic programming formulation to accommodate multiple candidate hand detections in every video frame. The algorithm finds the best alignment between the gesture model and the input, and simultaneously locates the best candidate hand detection in every frame. This allows a gesture to be recognized even when the hand location is highly ambiguous. The second contribution is a pruning method that uses model-specific classifiers to reject dynamic programming hypotheses with a poor match between the input and the model. Pruning improves the efficiency of the spatiotemporal matching algorithm, and in some cases may improve the recognition accuracy. The pruning classifiers are learned from training data, and cross-validation is used to reduce the chance of overpruning. The third contribution is a subgesture reasoning process that models the fact that some gesture models can falsely match parts of other, longer gestures. By integrating subgesture reasoning, the spotting algorithm can avoid the premature detection of a subgesture when the longer gesture is actually being performed. Subgesture relations between pairs of gestures are automatically learned from training data. The performance of the approach is evaluated on two challenging video datasets: hand-signed digits gestured by users wearing short-sleeved shirts in front of a cluttered background, and American Sign Language (ASL) utterances gestured by native ASL signers. The experiments demonstrate that the proposed method is more accurate and efficient than competing approaches. The proposed approach can be applied generally to alignment or search problems with multiple input observations that use dynamic programming to find a solution.
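The dynamic programming registration that these spotting methods share can be illustrated with a subsequence variant of Dynamic Time Warping over a 1D input stream. This is only a sketch of the underlying recurrence; the thesis extends it to handle multiple candidate hand detections per frame, pruning classifiers, and subgesture reasoning, none of which appear here.

```python
import numpy as np

def spot_subsequence_dtw(model, stream):
    """Subsequence DTW: find where `model` best matches inside `stream`.

    Minimal 1D sketch; the thesis extends this recurrence so that every
    input frame may carry several candidate hand detections.
    """
    m, n = len(model), len(stream)
    D = np.full((m + 1, n + 1), np.inf)
    D[0, :] = 0.0                     # the match may start at any frame
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = abs(model[i - 1] - stream[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # model element repeated
                                 D[i, j - 1],      # stream element skipped
                                 D[i - 1, j - 1])  # both advance
    end = int(np.argmin(D[m, 1:])) + 1
    return D[m, end], end             # matching cost and end frame

model = np.sin(np.linspace(0, np.pi, 20))
stream = np.concatenate([np.zeros(30),
                         np.sin(np.linspace(0, np.pi, 25)),
                         np.zeros(30)])
print(spot_subsequence_dtw(model, stream))
```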

Relevance: 80.00%

Publisher:

Abstract:

Log-polar image architectures, motivated by the structure of the human visual field, have long been investigated in computer vision for use in estimating motion parameters from an optical flow vector field. Practical problems with this approach have been: (i) dependence on an assumed alignment of the visual and motion axes; (ii) sensitivity to occlusion from moving and stationary objects in the central visual field, where much of the numerical sensitivity is concentrated; and (iii) inaccuracy of the log-polar architecture (which is an approximation to the central 20°) for wide-field biological vision. In the present paper, we show that an algorithm based on a generalization of the log-polar architecture, termed the log-dipolar sensor, provides a large improvement in performance relative to the usual log-polar sampling. Specifically, our algorithm: (i) is tolerant of large misalignment of the optical and motion axes; (ii) is insensitive to significant occlusion by objects of unknown motion; and (iii) represents a more correct analogy to the wide-field structure of human vision. Using the Helmholtz-Hodge decomposition to estimate the optical flow vector field on a log-dipolar sensor, we demonstrate these advantages using synthetic optical flow maps as well as natural image sequences.
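For readers unfamiliar with the sampling architecture being generalized, the sketch below resamples an image onto a classical log-polar grid (exponentially spaced rings, uniform wedges). It is a hypothetical illustration of the standard layout only; the paper's log-dipolar sensor and the Helmholtz-Hodge flow estimation are not reproduced.

```python
import numpy as np

def log_polar_sample(image, n_rings=32, n_wedges=64, r_min=2.0):
    """Resample a grayscale image onto a log-polar grid centred on the image.

    Radii grow exponentially with ring index, mimicking the coarsening of
    the visual field away from the fovea.  (The paper's log-dipolar sensor
    generalises this layout; this is only the classical version.)
    """
    h, w = image.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_max = min(cy, cx)
    rho = r_min * (r_max / r_min) ** (np.arange(n_rings) / (n_rings - 1))
    theta = 2 * np.pi * np.arange(n_wedges) / n_wedges
    ys = cy + rho[:, None] * np.sin(theta)[None, :]
    xs = cx + rho[:, None] * np.cos(theta)[None, :]
    # Nearest-neighbour lookup keeps the sketch dependency-free.
    return image[np.clip(ys.round().astype(int), 0, h - 1),
                 np.clip(xs.round().astype(int), 0, w - 1)]

img = np.random.default_rng(1).random((128, 128))
print(log_polar_sample(img).shape)   # (32, 64)
```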

Relevance: 80.00%

Publisher:

Abstract:

A neural model is proposed of how laminar interactions in the visual cortex may learn and recognize object texture and form boundaries. The model brings together five interacting processes: region-based texture classification, contour-based boundary grouping, surface filling-in, spatial attention, and object attention. The model shows how form boundaries can determine regions in which surface filling-in occurs; how surface filling-in interacts with spatial attention to generate a form-fitting distribution of spatial attention, or attentional shroud; how the strongest shroud can inhibit weaker shrouds; and how the winning shroud regulates learning of texture categories, and thus the allocation of object attention. The model can discriminate abutted textures with blurred boundaries and is sensitive to texture boundary attributes such as discontinuities in orientation and texture flow curvature, as well as to relative orientations of texture elements. The model quantitatively fits a large set of human psychophysical data on orientation-based textures. The object boundary output of the model is compared to computer vision algorithms using a set of human-segmented photographic images. The model classifies textures and suppresses noise using a multiple-scale oriented filterbank and a distributed Adaptive Resonance Theory (dART) classifier. The matched signal between the bottom-up texture inputs and top-down learned texture categories is utilized by oriented competitive and cooperative grouping processes to generate texture boundaries that control surface filling-in and spatial attention. Top-down modulatory attentional feedback from boundary and surface representations to early filtering stages results in enhanced texture boundaries and more efficient learning of texture within attended surface regions. Surface-based attention also provides a self-supervising training signal for learning new textures. The importance of surface-based attentional feedback in texture learning and classification is tested using a set of textured images from the Brodatz micro-texture album. Benchmark classification rates vary from 95.1% to 98.6% with attention, and from 90.6% to 93.2% without attention.
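The region-based texture front end referred to above can be approximated, very loosely, by pooling the responses of a multiple-scale oriented filterbank; the sketch below uses real Gabor kernels as a stand-in and omits the dART classifier, boundary grouping, filling-in, and attentional stages entirely.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(size, wavelength, theta, sigma):
    # Real (even) Gabor kernel: an oriented band-pass filter.
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)
    xr = x * np.cos(theta) + y * np.sin(theta)
    return (np.exp(-(x**2 + y**2) / (2 * sigma**2))
            * np.cos(2 * np.pi * xr / wavelength))

def oriented_texture_features(image, scales=(2, 4, 8), n_orient=4):
    """Multiple-scale oriented filterbank responses, pooled per channel.

    A generic texture front end of the kind the model's classification
    stage relies on; the dART category learning and boundary/surface
    stages are not shown.
    """
    feats = []
    for wavelength in scales:
        for k in range(n_orient):
            kern = gabor_kernel(4 * wavelength + 1, wavelength,
                                np.pi * k / n_orient, 0.5 * wavelength)
            resp = convolve2d(image, kern, mode='same', boundary='symm')
            feats.append(np.abs(resp).mean())   # pooled response magnitude
    return np.array(feats)

patch = np.random.default_rng(2).random((64, 64))
print(oriented_texture_features(patch).shape)   # (12,)
```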

Relevance: 80.00%

Publisher:

Abstract:

This dissertation proposes and demonstrates novel smart modules to solve challenging problems in the areas of imaging, communications, and displays. The smartness of the modules is due to their ability to adapt to changes in operating environment and application using programmable devices, specifically electronically controlled variable focus lenses (ECVFLs) and digital micromirror devices (DMDs). The proposed modules include imagers for laser characterization and general-purpose imaging that smartly adapt to changes in irradiance, optical wireless communication systems that can adapt to the number of users and to changes in link length, and a smart laser projection display that adjusts the pixel size to achieve a high-resolution projected image at each screen distance. The first part of the dissertation starts with the proposal of using an ECVFL to create a novel multimode laser beam characterizer for coherent light. This laser beam characterizer uses the ECVFL and a DMD so that no mechanical motion of optical components along the optical axis is required. This reduces the mechanical motion overhead of traditional laser beam characterizers, making this laser beam characterizer more accurate and reliable. The smart laser beam characterizer is able to account for irradiance fluctuations in the source. Using image processing, the important parameters that describe multimode laser beam propagation have been successfully extracted for a multimode laser test source. Specifically, the laser beam analysis parameters measured are the M2 parameter, the minimum beam waist w0, and the Rayleigh range zR. Next, a general-purpose incoherent-light imager that has a high dynamic range (>100 dB) and automatically adjusts for variations in irradiance in the scene is proposed. Then a data-efficient image sensor is demonstrated. The idea of this smart image sensor is to reduce the bandwidth needed for transmitting data from the sensor by sending only the information required for the specific application while discarding the unnecessary data. In this case, the imager demonstrated sends only information regarding the boundaries of objects in the image, so that after transmission to a remote image viewing location, these boundaries can be used to map out objects in the original image. The second part of the dissertation proposes and demonstrates smart optical communication systems using ECVFLs. This starts with the proposal and demonstration of a zero-propagation-loss optical wireless link using visible light, with experiments covering a 1 to 4 m range. By adjusting the focal length of the ECVFLs for this directed line-of-sight (LOS) link, the laser beam propagation parameters are adjusted such that the maximum amount of transmitted optical power is captured by the receiver for each link length. This power budget saving enables a longer achievable link range, a better SNR/BER, or higher power efficiency, since more received power means the transmitted power can be reduced. Afterwards, a smart dual-mode optical wireless link is proposed and demonstrated using a laser and an LED coupled to the ECVFL to provide, for the first time, the combined features of high bandwidth and wide beam coverage. This optical wireless link combines the capabilities of the smart directed LOS link from the previous section with a diffuse optical wireless link, thus achieving high data rates and robustness to blocking. The proposed smart system can switch from LOS mode to diffuse mode when blocking occurs, or operate in both modes simultaneously to accommodate multiple users and run a high-speed link if one of the users requires extra bandwidth. The last part of this section presents the design of fibre-optic and free-space optical switches which use ECVFLs to deflect the beams to achieve the switching operation. These switching modules can be used in the proposed optical wireless indoor network. The final section of the thesis presents a novel smart laser scanning display. The ECVFL is used to create the smallest beam spot size possible for the designed system at the distance of the screen. The smart laser scanning display increases the spatial resolution of the display for any given distance. A basic smart display operation has been tested for red light, and a 4X improvement in pixel resolution for the image has been demonstrated.
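The beam parameters measured by the characterizer in the first part (M2, w0, zR) are conventionally recovered by fitting the squared beam radius against propagation distance, w(z)^2 = w0^2 [1 + ((z - z0)/zR)^2] with M2 = pi w0^2 / (lambda zR). The sketch below shows only this analysis step on synthetic measurements; the ECVFL/DMD acquisition hardware described in the dissertation is not modeled.

```python
import numpy as np

def beam_quality_from_caustic(z, w, wavelength):
    """Estimate z0, w0, zR and M^2 from beam-radius measurements w(z).

    Uses the standard caustic fit w(z)^2 = A z^2 + B z + C; this is the
    analysis step only, not the ECVFL/DMD acquisition hardware described
    in the dissertation.
    """
    A, B, C = np.polyfit(z, np.asarray(w) ** 2, 2)
    z0 = -B / (2 * A)                       # waist location
    w0 = np.sqrt(C - B**2 / (4 * A))        # minimum beam radius
    zR = np.sqrt((C - B**2 / (4 * A)) / A)  # Rayleigh range
    M2 = np.pi * w0**2 / (wavelength * zR)  # beam propagation factor
    return z0, w0, zR, M2

# Synthetic check: a beam with w0 = 0.2 mm and M^2 = 1.8 at 633 nm.
lam, w0_true, M2_true = 633e-6, 0.2, 1.8          # millimetres
zR_true = np.pi * w0_true**2 / (M2_true * lam)
z = np.linspace(-100, 100, 25)
w = w0_true * np.sqrt(1 + (z / zR_true) ** 2)
print(beam_quality_from_caustic(z, w, lam))
```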

Relevance: 80.00%

Publisher:

Abstract:

Gemstone Team Vision

Relevance: 80.00%

Publisher:

Abstract:

Quantitative examination of prostate histology offers clues in the diagnostic classification of lesions and in the prediction of response to treatment and prognosis. To facilitate the collection of quantitative data, the development of machine vision systems is necessary. This study explored the use of imaging for identifying tissue abnormalities in prostate histology. Medium-power histological scenes were recorded from whole-mount radical prostatectomy sections at ×40 objective magnification and assessed by a pathologist as exhibiting stroma, normal tissue (non-neoplastic epithelial component), or prostatic carcinoma (PCa). A machine vision system was developed that divided the scenes into subregions of 100 × 100 pixels and subjected each to image-processing techniques. Analysis of morphological characteristics allowed the identification of normal tissue. Analysis of image texture demonstrated that Haralick feature 4 was the most suitable for discriminating stroma from PCa. Using these morphological and texture measurements, it was possible to define a classification scheme for each subregion. The machine vision system is designed to integrate these classification rules and generate digital maps of tissue composition from the classification of subregions; 79.3% of subregions were correctly classified. The established classification rates demonstrated the validity of the methodology on small scenes; a logical extension was to apply the methodology to whole-slide images via scanning technology. The machine vision system is capable of classifying these images. The machine vision system developed in this project facilitates the exploration of morphological and texture characteristics in quantifying tissue composition. It also illustrates the potential of quantitative methods to provide highly discriminatory information in the automated identification of prostatic lesions using computer vision.
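A digital tissue map of the kind described can be produced by tiling a scene into 100 × 100 pixel subregions and labeling each tile with a classification rule. The sketch below uses a toy intensity rule as a placeholder for the study's morphological and Haralick-feature rules, so only the tiling-and-mapping mechanics are illustrative.

```python
import numpy as np

def tissue_map(scene, classify_tile, tile=100):
    """Tile a histology scene into `tile` x `tile` subregions and build a
    digital map of per-tile class labels.

    `classify_tile` stands in for the study's morphology/Haralick rules;
    any callable returning an integer label for a 2D patch will do.
    """
    h, w = scene.shape[:2]
    rows, cols = h // tile, w // tile
    labels = np.empty((rows, cols), dtype=int)
    for r in range(rows):
        for c in range(cols):
            patch = scene[r * tile:(r + 1) * tile, c * tile:(c + 1) * tile]
            labels[r, c] = classify_tile(patch)
    return labels

# Toy rule: 0 = stroma, 1 = normal tissue, 2 = PCa, decided by mean intensity.
toy_rule = lambda p: int(np.digitize(p.mean(), [0.33, 0.66]))
scene = np.random.default_rng(3).random((400, 600))
print(tissue_map(scene, toy_rule))
```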

Relevance: 80.00%

Publisher:

Abstract:

Feature selection and feature weighting are useful techniques for improving the classification accuracy of the K-nearest-neighbor (K-NN) rule. The term feature selection refers to algorithms that select the best subset of the input feature set. In feature weighting, each feature is multiplied by a weight value proportional to the ability of the feature to distinguish pattern classes. In this paper, a novel hybrid approach is proposed for simultaneous feature selection and feature weighting of the K-NN rule based on the Tabu Search (TS) heuristic. The proposed TS heuristic in combination with the K-NN classifier is compared with several classifiers on various available data sets. The results indicate a significant improvement in classification accuracy. The proposed TS heuristic is also compared with various feature selection algorithms. The experiments revealed that the proposed hybrid TS heuristic is superior to both simple TS and sequential search algorithms. We also present results for the classification of prostate cancer using multispectral images, an important problem in biomedicine.
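A minimal version of a tabu search over feature subsets, scored with a leave-one-out 1-NN criterion, is sketched below. It is an assumption-laden simplification: single-bit flip moves, a fixed tabu tenure, and no feature weighting, unlike the full hybrid method in the paper.

```python
import numpy as np

def loo_1nn_accuracy(X, y):
    # Leave-one-out 1-NN accuracy used as the search criterion.
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)
    return float(np.mean(y[d.argmin(axis=1)] == y))

def tabu_feature_selection(X, y, n_iter=30, tenure=5):
    """Tabu search over binary feature masks, scored by K-NN accuracy.

    Minimal sketch: single-bit flips, a fixed-length tabu list, and an
    aspiration criterion that accepts tabu moves improving the best score.
    """
    n = X.shape[1]
    mask = np.ones(n, dtype=bool)                 # start with all features
    best_mask, best_acc = mask.copy(), loo_1nn_accuracy(X[:, mask], y)
    tabu = []
    for _ in range(n_iter):
        moves = []
        for j in range(n):
            cand = mask.copy()
            cand[j] = not cand[j]
            if not cand.any():
                continue
            moves.append((loo_1nn_accuracy(X[:, cand], y), j, cand))
        moves.sort(reverse=True, key=lambda t: t[0])
        for acc, j, cand in moves:
            if j not in tabu or acc > best_acc:   # aspiration criterion
                mask = cand
                tabu = (tabu + [j])[-tenure:]
                if acc > best_acc:
                    best_mask, best_acc = cand.copy(), acc
                break
    return best_mask, best_acc

rng = np.random.default_rng(1)
X = rng.normal(size=(80, 8))
y = (X[:, 0] + X[:, 3] > 0).astype(int)
X[:, 4:] = rng.normal(size=(80, 4))               # pure noise features
print(tabu_feature_selection(X, y))
```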

Relevance: 80.00%

Publisher:

Abstract:

This paper presents a novel approach based on the use of evolutionary agents for epipolar geometry estimation. In contrast to conventional nonlinear optimization methods, the proposed technique uses each agent to denote a minimal subset from which the fundamental matrix is computed, and treats the data set of correspondences as a 1D cellular environment which the agents inhabit and in which they evolve. The agents execute evolutionary behaviors and evolve autonomously in a vast solution space to reach the optimal (or near-optimal) result. Three different techniques are then proposed to improve the searching ability and computational efficiency of the original agents. The subset template enables agents to collaborate more efficiently with each other and to inherit accurate information from the whole agent set. The competitive evolutionary agent (CEA) and finite multiple evolutionary agent (FMEA) techniques apply a better evolutionary strategy or decision rule, and focus on different aspects of the evolutionary process. Experimental results with both synthetic data and real images show that the proposed agent-based approaches perform better than other typical methods in terms of accuracy and speed, and are more robust to noise and outliers.
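The minimal-subset idea can be grounded with standard machinery: the normalized eight-point algorithm estimates a fundamental matrix from eight correspondences, and the Sampson error scores how well it explains the full correspondence set. In the sketch below, a plain random-subset loop stands in for the evolutionary agents, so it illustrates the scoring of minimal subsets rather than the paper's search strategy.

```python
import numpy as np

def fundamental_8pt(x1, x2):
    """Fundamental matrix from >= 8 correspondences x1[i] <-> x2[i],
    via the normalised eight-point algorithm."""
    def normalise(pts):
        c = pts.mean(axis=0)
        s = np.sqrt(2) / np.mean(np.linalg.norm(pts - c, axis=1))
        T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1]])
        return np.column_stack([pts, np.ones(len(pts))]) @ T.T, T
    p1, T1 = normalise(x1)
    p2, T2 = normalise(x2)
    A = np.column_stack([p2[:, 0:1] * p1, p2[:, 1:2] * p1, p1])
    F = np.linalg.svd(A)[2][-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0]) @ Vt        # enforce rank 2
    return T2.T @ F @ T1

def sampson_error(F, x1, x2):
    p1 = np.column_stack([x1, np.ones(len(x1))])
    p2 = np.column_stack([x2, np.ones(len(x2))])
    Fx1, Ftx2 = p1 @ F.T, p2 @ F
    num = np.sum(p2 * Fx1, axis=1) ** 2
    den = Fx1[:, 0]**2 + Fx1[:, 1]**2 + Ftx2[:, 0]**2 + Ftx2[:, 1]**2
    return num / den

def best_minimal_subset(x1, x2, n_trials=200, thresh=1e-3, seed=0):
    # Each trial plays the role of one "agent": an 8-point minimal subset
    # scored by how many correspondences it explains (inliers).
    rng = np.random.default_rng(seed)
    best = (-1, None)
    for _ in range(n_trials):
        idx = rng.choice(len(x1), 8, replace=False)
        F = fundamental_8pt(x1[idx], x2[idx])
        inliers = int(np.sum(sampson_error(F, x1, x2) < thresh))
        if inliers > best[0]:
            best = (inliers, F)
    return best

# Synthetic demo: project random 3D points into two translated views.
def project(P, Xw):
    Xh = np.column_stack([Xw, np.ones(len(Xw))]) @ P.T
    return Xh[:, :2] / Xh[:, 2:]

rng = np.random.default_rng(4)
Xw = rng.uniform(-1, 1, (100, 3)) + [0, 0, 5]
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), [[0.2], [0.0], [0.0]]])
x1, x2 = project(P1, Xw), project(P2, Xw)
print(best_minimal_subset(x1, x2)[0], "inliers out of", len(x1))
```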

Relevance: 80.00%

Publisher:

Abstract:

The Grey Level Co-occurrence Matrix (GLCM), one of the best-known tools for texture analysis, estimates image properties related to second-order statistics. These image properties, commonly known as Haralick texture features, can be used for image classification, image segmentation, and remote sensing applications. However, their computation is highly intensive, especially for very large images such as medical ones. Therefore, methods to accelerate their computation are highly desirable. This paper proposes the use of programmable hardware to accelerate the calculation of the GLCM and Haralick texture features. Further, as an example of the speedup offered by programmable logic, a multispectral computer vision system for automatic diagnosis of prostatic cancer has been implemented. The performance is then compared against a microprocessor-based solution.
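A straightforward software reference for the quantities being accelerated is given below: a symmetric, normalized GLCM for one displacement and three of the Haralick features (contrast, energy, homogeneity). The FPGA implementation that is the paper's contribution is not shown; this is only the baseline computation.

```python
import numpy as np

def glcm(image, dx=1, dy=0, levels=8):
    """Grey Level Co-occurrence Matrix for one displacement (dx, dy).

    `image` is quantised to `levels` grey levels; the matrix is symmetrised
    and normalised so its entries are co-occurrence probabilities.
    """
    q = np.floor(image.astype(float) / image.max() * (levels - 1e-9)).astype(int)
    P = np.zeros((levels, levels))
    h, w = q.shape
    src = q[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
    dst = q[max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)]
    np.add.at(P, (src.ravel(), dst.ravel()), 1)
    P = P + P.T                       # symmetric co-occurrences
    return P / P.sum()

def haralick_subset(P):
    # Three representative Haralick features computed from the GLCM.
    i, j = np.indices(P.shape)
    return {
        "contrast":    float(np.sum((i - j) ** 2 * P)),
        "energy":      float(np.sum(P ** 2)),           # angular 2nd moment
        "homogeneity": float(np.sum(P / (1.0 + np.abs(i - j)))),
    }

img = np.random.default_rng(5).integers(0, 256, (64, 64))
print(haralick_subset(glcm(img)))
```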

Relevance: 80.00%

Publisher:

Abstract:

In a typical shoeprint classification and retrieval system, the first step is to segment meaningful basic shapes and patterns in a noisy shoeprint image. This step has a significant influence on shape descriptors and shoeprint indexing in the later stages. In this paper, we extend a recently developed denoising technique proposed by Buades, called non-local means filtering, to give a more general model. In this model, the expected result of an operation on a pixel can be estimated by performing the same operation on all of its reference pixels in the same image. A working pixel's reference pixels are those pixels whose neighbourhoods are similar to the working pixel's neighbourhood, where similarity is based on the correlation between the local neighbourhoods of the working pixel and the reference pixel. We incorporate a special instance of this general model into thresholding a very noisy shoeprint image. Visual and quantitative comparisons with two benchmark techniques, those of Otsu and Kittler, are conducted in the last section, giving evidence of the effectiveness of our method for thresholding noisy shoeprint images.
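The general model can be made concrete with a small sketch in which the operation being transferred between a pixel and its reference pixels is a binary threshold decision, weighted by patch similarity as in non-local means. The parameters (patch size, search window, similarity bandwidth h, threshold t) are illustrative choices, not the paper's formulation.

```python
import numpy as np

def nonlocal_threshold(image, t, patch=3, search=7, h=0.1):
    """Non-local thresholding sketch: a pixel's binary decision is the
    similarity-weighted average of the threshold decision taken at its
    reference pixels (pixels with similar neighbourhoods), then rounded.

    A simplified instance of the general model described above.
    """
    img = image.astype(float)
    pad = patch // 2
    padded = np.pad(img, pad, mode='reflect')
    H, W = img.shape
    out = np.zeros((H, W))
    half = search // 2
    for y in range(H):
        for x in range(W):
            ref = padded[y:y + patch, x:x + patch]
            num = den = 0.0
            for dy in range(-half, half + 1):
                for dx in range(-half, half + 1):
                    yy, xx = y + dy, x + dx
                    if not (0 <= yy < H and 0 <= xx < W):
                        continue
                    cand = padded[yy:yy + patch, xx:xx + patch]
                    wgt = np.exp(-np.mean((ref - cand) ** 2) / h ** 2)
                    num += wgt * (img[yy, xx] > t)  # same op on reference pixel
                    den += wgt
            out[y, x] = num / den
    return out > 0.5

noisy = np.random.default_rng(6).random((32, 32))
print(nonlocal_threshold(noisy, t=0.5).mean())
```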