851 results for computer vision face recognition detection voice recognition sistemi biometrici iOS
Abstract:
The objective of this study was to determine the potential of mid-infrared spectroscopy coupled with multidimensional statistical analysis for the prediction of processed cheese instrumental texture and meltability attributes. Processed cheeses (n = 32) of varying composition were manufactured in a pilot plant. Following two and four weeks of storage at 4 °C, samples were analysed using texture profile analysis, two meltability tests (computer vision, Olson and Price) and mid-infrared spectroscopy (4000-640 cm⁻¹). Partial least squares regression was used to develop predictive models for all measured attributes. Five attributes were successfully modelled with varying degrees of accuracy. The computer vision meltability model allowed for discrimination between high and low melt values (R² = 0.64). The hardness and springiness models gave approximate quantitative results (R² = 0.77), while the cohesiveness (R² = 0.81) and Olson and Price meltability (R² = 0.88) models gave good prediction results.
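As an illustration of the modelling step, below is a minimal sketch of partial least squares regression from spectra to a single texture attribute; the data are synthetic, and the number of latent variables and the cross-validation scheme are assumptions rather than the study's actual settings.

```python
# A minimal sketch of PLS regression from mid-IR spectra to one texture
# attribute. Synthetic data stand in for the real spectra and measurements;
# the component count and validation scheme are assumptions.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n_samples, n_wavenumbers = 32, 1700                 # ~4000-640 cm^-1 region
X = rng.normal(size=(n_samples, n_wavenumbers))     # stand-in absorbance spectra
y = X[:, :50].mean(axis=1) + 0.1 * rng.normal(size=n_samples)  # stand-in attribute

pls = PLSRegression(n_components=5)                 # latent-variable count is a guess
y_cv = cross_val_predict(pls, X, y, cv=8)           # 8-fold cross-validation
print(f"cross-validated R^2 = {r2_score(y, y_cv):.2f}")
```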
Abstract:
The technique of constructing a transformation, or regrading, of a discrete data set such that the histogram of the transformed data matches a given reference histogram is commonly known as histogram modification. The technique is widely used for image enhancement and normalization. A method which has been previously derived for producing such a regrading is shown to be “best” in the sense that it minimizes the error between the cumulative histogram of the transformed data and that of the given reference function, over all single-valued, monotone, discrete transformations of the data. Techniques for smoothed regrading, which provide a means of balancing the error in matching a given reference histogram against the information lost with respect to a linear transformation, are also examined. The smoothed regradings are shown to optimize certain cost functionals. Numerical algorithms for generating the smoothed regradings, which are simple and efficient to implement, are described, and practical applications to the processing of LANDSAT image data are discussed.
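A minimal sketch of the basic (unsmoothed) regrading, which pairs the cumulative histograms of the data and the reference, is shown below; it assumes 8-bit single-band data and does not implement the smoothed variants discussed in the paper.

```python
# A sketch of histogram matching (regrading) by pairing cumulative histograms.
# Assumes 8-bit single-band data; the smoothed regradings are not implemented.
import numpy as np

def match_histogram(data: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Monotone lookup table that makes the data's CDF track the reference CDF."""
    levels = 256
    src_hist, _ = np.histogram(data, bins=levels, range=(0, levels))
    ref_hist, _ = np.histogram(reference, bins=levels, range=(0, levels))
    src_cdf = np.cumsum(src_hist) / data.size
    ref_cdf = np.cumsum(ref_hist) / reference.size
    # For each source level, take the first reference level whose CDF reaches it.
    lut = np.searchsorted(ref_cdf, src_cdf).clip(0, levels - 1).astype(np.uint8)
    return lut[data]

rng = np.random.default_rng(1)
band = rng.integers(0, 200, size=(64, 64), dtype=np.uint8)        # stand-in band
reference = rng.integers(50, 256, size=(64, 64), dtype=np.uint8)  # reference data
print(match_histogram(band, reference).mean())
```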
Abstract:
The current state of the art and direction of research in computer vision aimed at automating the analysis of CCTV images is presented. This includes low-level identification of objects within the field of view of cameras, following those objects over time and between cameras, and the interpretation of those objects' appearance and movements with respect to models of behaviour (and therefore the intentions inferred from them). The potential ethical problems (and some potential opportunities) such developments may pose if and when deployed in the real world are presented, and suggestions are made as to the new regulations that will be needed if such systems are not to further enhance the power of the surveillers against the surveilled.
Abstract:
This paper presents a neuroscience-inspired, information-theoretic approach to motion segmentation. Robust motion segmentation represents a fundamental first stage in many surveillance tasks. As an alternative to widely adopted individual segmentation approaches, which are challenged in different ways by imagery exhibiting a wide range of environmental variation and irrelevant motion, this paper presents a new biologically inspired approach which computes the multivariate mutual information between multiple complementary motion segmentation outputs. Performance evaluation across a range of datasets and against competing segmentation methods demonstrates robust performance.
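The sketch below illustrates the underlying idea on the smallest possible case, the pairwise mutual information between two binary motion masks; the masks are synthetic, and the paper's multivariate formulation over several detectors is not reproduced.

```python
# A sketch of the pairwise case: mutual information between two binary
# motion-segmentation masks. The masks are synthetic, and the multivariate
# combination of several detectors used in the paper is not reproduced.
import numpy as np
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(2)
mask_a = rng.random((120, 160)) > 0.7                 # e.g. background subtraction
mask_b = mask_a ^ (rng.random((120, 160)) > 0.95)     # a second, noisier detector

mi = mutual_info_score(mask_a.ravel(), mask_b.ravel())
print(f"mutual information between masks: {mi:.3f} nats")
```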
Abstract:
Sparse coding aims to find a more compact representation based on a set of dictionary atoms. A well-known technique looking at 2D sparsity is the low rank representation (LRR). However, in many computer vision applications, data often originate from a manifold, which is equipped with some Riemannian geometry. In this case, the existing LRR becomes inappropriate for modeling and incorporating the intrinsic geometry of the manifold, which is potentially important and critical to applications. In this paper, we generalize the LRR over the Euclidean space to an LRR model over a specific Riemannian manifold: the manifold of symmetric positive definite (SPD) matrices. Experiments on several computer vision datasets showcase its noise robustness and superior performance on classification and segmentation compared with state-of-the-art approaches.
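As a rough illustration of working with SPD-valued data, the sketch below flattens SPD matrices (such as region covariance descriptors) through the matrix logarithm so that Euclidean tools can be applied to them; this log-Euclidean shortcut is only an illustrative assumption, not the paper's LRR formulation on the SPD manifold.

```python
# A sketch of the log-Euclidean shortcut: map SPD matrices (e.g. region
# covariance descriptors) to tangent-space vectors via the matrix logarithm,
# so Euclidean models can operate on them. Not the paper's LRR-on-SPD solver.
import numpy as np
from scipy.linalg import logm

def spd_to_vector(S: np.ndarray) -> np.ndarray:
    """Upper triangle of log(S) as a flat descriptor."""
    L = logm(S).real
    return L[np.triu_indices_from(L)]

rng = np.random.default_rng(3)
A = rng.normal(size=(5, 5))
S = A @ A.T + 5 * np.eye(5)        # a random SPD matrix
print(spd_to_vector(S).shape)      # 15-dimensional descriptor
```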
Abstract:
Object selection refers to the mechanism of extracting objects of interest while ignoring other objects and the background in a given visual scene. It is a fundamental issue for many computer vision and image analysis techniques, and it remains a challenging task for artificial visual systems. Chaotic phase synchronization takes place in cases involving almost identical dynamical systems; it means that the phase difference between the systems is kept bounded over time, while their amplitudes remain chaotic and may be uncorrelated. Instead of complete synchronization, phase synchronization is believed to be a mechanism for neural integration in the brain. In this paper, an object selection model is proposed. Oscillators in the network representing the salient object in a given scene are phase synchronized, while no phase synchronization occurs for background objects. In this way, the salient object can be extracted. In this model, a shift mechanism is also introduced to change attention from one object to another. Computer simulations show that the model produces results similar to those observed in natural vision systems.
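The sketch below conveys the notion of phase synchronization with a simplified Kuramoto-style network in which only the oscillators representing the salient object are coupled; the paper's chaotic oscillators and shift mechanism are not reproduced, and all parameters are assumptions.

```python
# A simplified Kuramoto-style stand-in for phase synchronization: oscillators
# 0-4 ("object") are mutually coupled and lock phases, oscillators 5-9
# ("background") are uncoupled and drift. Parameters are arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(4)
n, dt, steps, coupling = 10, 0.01, 5000, 2.0
omega = rng.normal(1.0, 0.2, size=n)            # natural frequencies
theta = rng.uniform(0.0, 2.0 * np.pi, size=n)   # initial phases
object_ids = np.arange(5)

for _ in range(steps):
    dtheta = omega.copy()
    obj = theta[object_ids]
    # couple only the oscillators that represent the salient object
    dtheta[object_ids] += coupling * np.mean(np.sin(obj[None, :] - obj[:, None]), axis=1)
    theta += dt * dtheta

# order parameter: typically near 1 for the coupled group, lower for the rest
r_obj = np.abs(np.mean(np.exp(1j * theta[object_ids])))
r_bg = np.abs(np.mean(np.exp(1j * theta[5:])))
print(f"object coherence {r_obj:.2f}, background coherence {r_bg:.2f}")
```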
Abstract:
The national railway administrations in Scandinavia, Germany, and Austria mainly resort to manual inspections to control vegetation growth along railway embankments. Manually inspecting railways is slow and time consuming. A more worrying aspect is that human observers are often unable to estimate the true cover of vegetation on railway embankments, and observers often disagree with each other when more than one is engaged for inspection. The lack of proper techniques to identify the true cover of vegetation even results in the excess usage of herbicides, seriously harming the environment and threatening the ecology. Hence, this study investigated aspects relevant to human variation and agreement in order to recommend better inspection routines. This was studied by carrying out two separate yet related investigations. First, thirteen observers were separately asked to estimate the vegetation cover in nine images acquired (in nadir view) over the railway tracks. All estimates were compared, and an analysis of variance showed a significant difference in the observers' cover estimates (p < 0.05). Bearing in mind the difference between the observers, a second follow-up field study on the railway tracks was initiated. Two railway segments (strata) representing different levels of vegetation were carefully selected. Five sample plots (each covering an area of one-by-one meter) were randomized from each stratum along the rails of the aforementioned segments, and ten images were acquired in nadir view. Three observers (with knowledge in the railway maintenance domain) were then separately asked to estimate the plant cover by visually examining the plots. Again, an analysis of variance showed a significant difference in the observers' cover estimates (p < 0.05), confirming the result from the first investigation. The differences in observations were compared against a computer vision algorithm which detects the "true" cover of vegetation in a given image; the true cover is defined as the amount of greenish pixels in each image as detected by the algorithm. The results of this comparison strongly indicate that inconsistency is prevalent among the estimates reported by the observers. Hence, an automated approach based on computer vision is suggested, thus turning the manual inspections into objective, monitored inspections.
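A minimal sketch of a "greenish pixel" cover measure of the kind described above is given below, using an excess-green index with a fixed threshold; the study's actual detector and threshold are not published in the abstract, so both are assumptions.

```python
# A sketch of a "greenish pixel" cover measure: the fraction of pixels whose
# excess-green index exceeds a threshold. The study's own detector and
# threshold are not given in the abstract, so both are assumptions.
import numpy as np

def vegetation_cover(rgb: np.ndarray, threshold: float = 0.05) -> float:
    """Fraction of pixels classified as vegetation by the excess-green index."""
    img = rgb.astype(np.float64) / 255.0
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    exg = 2.0 * g - r - b                  # excess-green index
    return float(np.mean(exg > threshold))

rng = np.random.default_rng(5)
plot_image = rng.integers(0, 256, size=(480, 640, 3), dtype=np.uint8)  # stand-in plot photo
print(f"estimated cover: {vegetation_cover(plot_image):.1%}")
```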
Abstract:
Computer vision is a field that uses techniques to acquire, process, analyze and understand images from the real world in order to produce numeric or symbolic information in the form of decisions [1]. This project aims to use computer vision to build an app that analyzes a Madeira wine and characterizes it (identifies its variety) by its color. Dry or sweet wines, young or old wines, each have a specific color. The app uses histogram-comparison techniques to analyze the images taken of a test sample inside a special container designed for this purpose. Analyzing the color of a wine sample from an image captured by a smartphone can be difficult: many factors affect the captured image, such as lighting conditions and the background behind the sample container, which changes with the position from which the photo is taken (capturing against a white wall differs from capturing towards the floor, for example). Using new technologies such as 3D printing, it was possible to create a prototype that aims to control the effect of those external factors on the captured image. The results of this experiment are good indicators for future work. Although more tests are needed, the first tests had a success rate of 80% to 90% of correct results. This report documents the development of the project and all the techniques and steps required to execute the tests.
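A minimal sketch of the histogram-comparison step is shown below, matching a sample's hue histogram against reference histograms with OpenCV's compareHist; the colour space, metric and reference images are assumptions, not the project's exact setup.

```python
# A sketch of the histogram-comparison step: the sample's hue histogram is
# matched against reference histograms of known varieties. Reference images,
# colour space and metric are assumptions, not the project's exact setup.
import cv2
import numpy as np

def hue_histogram(bgr: np.ndarray) -> np.ndarray:
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0], None, [180], [0, 180])
    return cv2.normalize(hist, hist).flatten()

rng = np.random.default_rng(6)
# stand-in photos; in the app these would be taken inside the special container
sample = rng.integers(0, 256, size=(100, 100, 3), dtype=np.uint8)
references = {"dry": rng.integers(0, 256, size=(100, 100, 3), dtype=np.uint8),
              "sweet": rng.integers(0, 256, size=(100, 100, 3), dtype=np.uint8)}

sample_hist = hue_histogram(sample)
scores = {name: cv2.compareHist(sample_hist, hue_histogram(img), cv2.HISTCMP_CORREL)
          for name, img in references.items()}
print("best match:", max(scores, key=scores.get), scores)
```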
Abstract:
Humans can perceive three dimensions; our world is three-dimensional, and it is becoming increasingly digital too. We feel the need to capture and preserve our existence in digital form, perhaps due to our own mortality. We also need to reproduce objects, or create small identical copies of them, in order to prototype, test or study them. Some objects have been lost through time and are only accessible through old photographs. With robust model generation from photographs we can use one of the biggest human data sets and reproduce real-world objects digitally and, with printers, physically. What is the current state of development in three-dimensional reconstruction from photographs, both in the commercial world and in the open-source world? And what tools are available for developers to build their own reconstruction software? To answer these questions, several pieces of software were tested, from full commercial packages to small open-source projects, including libraries aimed at computer vision. To bring the 3D models into the real world, a 3D printer was built, tested and analyzed, and its problems and weaknesses evaluated. Lastly, using a computer vision library, a small piece of software with limited capabilities was developed.
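As a pointer to what such a library-based tool involves, the sketch below shows the feature-matching and relative-pose step that photo-based reconstruction pipelines typically start from, using OpenCV; the image paths and camera intrinsics are placeholders, and this is not the software described in the report.

```python
# A sketch of the feature-matching and relative-pose step that photo-based
# reconstruction usually starts from, using OpenCV. The image paths and camera
# intrinsics below are placeholders, not values from the report.
import cv2
import numpy as np

img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder paths
img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)
if img1 is None or img2 is None:
    raise SystemExit("provide two overlapping photographs of the object")

orb = cv2.ORB_create(2000)
k1, d1 = orb.detectAndCompute(img1, None)
k2, d2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:500]
pts1 = np.float32([k1[m.queryIdx].pt for m in matches])
pts2 = np.float32([k2[m.trainIdx].pt for m in matches])

K = np.array([[1000.0, 0.0, 640.0],    # assumed pinhole intrinsics
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
print("relative rotation between the two views:\n", R)
```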
Abstract:
Visual attention is a very important task in autonomous robotics but, because of its complexity, the processing time required is significant. We propose an architecture for feature selection using foveated images that is guided by visual attention tasks and that reduces the processing time required to perform these tasks. Our system can be applied in bottom-up or top-down visual attention. The foveated model determines which scales are to be used by the feature extraction algorithm. The system is able to discard features that are not strictly necessary for the tasks, thus reducing the processing time. If the fovea is correctly placed, it is possible to reduce the processing time without compromising the quality of the tasks' outputs. The distance of the fovea from the object is also analyzed. If the visual system loses tracking in top-down attention, basic fovea-placement strategies can be applied. Experiments have shown that the processing time can be reduced by up to 60% with this approach. To validate the method, we tested it with the feature algorithm known as Speeded Up Robust Features (SURF), one of the most efficient approaches to feature extraction. With the proposed architecture, we can meet the real-time requirements of robot vision, mainly for application in autonomous robotics.
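The sketch below conveys the basic idea of restricting feature extraction to a foveal region of interest; ORB stands in for SURF (which ships only with opencv-contrib), the fovea is reduced to a single circular mask rather than the paper's multi-scale model, and its centre and radius are assumptions.

```python
# A sketch of foveated feature extraction: keypoints are computed only inside
# a circular fovea mask. ORB stands in for SURF (which needs opencv-contrib),
# and the fovea centre/radius as well as the test frame are assumptions.
import cv2
import numpy as np

rng = np.random.default_rng(7)
frame = rng.integers(0, 256, size=(480, 640), dtype=np.uint8)   # stand-in frame

fovea_center, fovea_radius = (320, 240), 120
mask = np.zeros(frame.shape, dtype=np.uint8)
cv2.circle(mask, fovea_center, fovea_radius, 255, thickness=-1)

orb = cv2.ORB_create(nfeatures=1000)
kp_full, _ = orb.detectAndCompute(frame, None)    # whole frame
kp_fovea, _ = orb.detectAndCompute(frame, mask)   # fovea only: less work
print(f"keypoints: full frame {len(kp_full)}, fovea only {len(kp_fovea)}")
```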
Abstract:
We propose a multi-resolution, coarse-to-fine approach to stereo matching, in which the first matching happens at a different depth for each pixel. The proposed technique has the potential to attenuate several problems faced by the constant-depth algorithm, making it possible to reduce the number of errors or the number of comparisons needed to obtain equivalent results. Several experiments were performed to demonstrate the method's efficiency, including a comparison with the traditional plain correlation technique, in which the multi-resolution matching with variable depth proposed here generated better results with a smaller processing time.
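A generic coarse-to-fine block-matching sketch is given below: a full disparity search is run on a downsampled pair, and the fine level then searches only near the upsampled coarse estimate. It illustrates the general coarse-to-fine idea only, not the paper's variable per-pixel matching depth; the window size, search ranges and synthetic image pair are assumptions.

```python
# A generic coarse-to-fine block-matching sketch: a full disparity search on a
# downsampled pair, then a narrow search around the upsampled coarse estimate.
# Window size, search ranges and the synthetic image pair are assumptions.
import cv2
import numpy as np

def block_match(left, right, max_disp, prior=None, window=5, radius=2):
    """SAD block matching; if a prior is given, search only near it."""
    h, w = left.shape
    half = window // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            lo, hi = 0, max_disp
            if prior is not None:
                lo = max(0, int(prior[y, x]) - radius)
                hi = min(max_disp, int(prior[y, x]) + radius)
            patch = left[y - half:y + half + 1, x - half:x + half + 1].astype(np.int32)
            best_cost, best_d = None, 0
            for d in range(lo, hi + 1):
                cand = right[y - half:y + half + 1, x - d - half:x - d + half + 1].astype(np.int32)
                cost = np.abs(patch - cand).sum()
                if best_cost is None or cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp

rng = np.random.default_rng(8)
left = rng.integers(0, 256, size=(64, 96), dtype=np.uint8)
right = np.roll(left, -4, axis=1)          # synthetic pair with 4-pixel disparity

coarse = block_match(cv2.pyrDown(left), cv2.pyrDown(right), max_disp=8)
prior = cv2.resize((coarse * 2).astype(np.float32), (96, 64),
                   interpolation=cv2.INTER_NEAREST).astype(np.int32)
fine = block_match(left, right, max_disp=16, prior=prior)   # narrow search only
print("median fine disparity:", int(np.median(fine[fine > 0])))
```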