317 results for techniques: image processing


Relevance:

90.00%

Publisher:

Abstract:

An algorithm for computing dense correspondences between images of a stereo pair or image sequence is presented. The algorithm can make use of both standard matching metrics and the rank and census filters, two filters based on order statistics which have been applied to the image matching problem. Their advantages include robustness to radiometric distortion and amenability to hardware implementation. Results obtained using both real stereo pairs and a synthetic stereo pair with ground truth were compared. The rank and census filters were shown to significantly improve performance in the case of radiometric distortion. In all cases, the results obtained were comparable to, if not better than, those obtained using standard matching metrics. Furthermore, the rank and census filters have the additional advantage that their computational overhead is lower than that of these metrics. For all techniques tested, the difference between the results obtained for the synthetic stereo pair and the ground truth was small.
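As a concrete illustration of the order-statistic filters mentioned above, the census transform and its Hamming-distance matching cost can be sketched as follows. This is a minimal NumPy version: the 3×3 window, the wrap-around border handling via `np.roll`, and the bit ordering are simplifying assumptions, not the paper's implementation.

```python
import numpy as np

def census_transform(img, radius=1):
    """Census transform: encode each pixel as a bit string recording
    whether each neighbour in a (2r+1)x(2r+1) window is darker than the
    centre pixel. Borders wrap around via np.roll; a real implementation
    would crop to the valid region instead."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.int64)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy == 0 and dx == 0:
                continue
            # shifted[y, x] == img[y + dy, x + dx]
            shifted = np.roll(np.roll(img, -dy, axis=0), -dx, axis=1)
            out = (out << 1) | (shifted < img).astype(np.int64)
    return out

def hamming_cost(c1, c2):
    """Matching cost between census codes: the Hamming distance,
    i.e. the popcount of the XOR of the two bit strings."""
    x = np.bitwise_xor(c1, c2)
    count = np.zeros_like(x)
    while np.any(x):
        count += x & 1
        x >>= 1
    return count
```

The Hamming cost between census codes is what makes the metric robust to radiometric distortion: only the ordering of intensities within the window matters, not their absolute values.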


This project addresses the viability of lightweight, low power consumption, flexible, large format LED screens. The investigation encompasses all aspects of the electrical and mechanical design, individually and as a system, and achieves a successful full scale prototype. The prototype implements novel techniques to achieve large displacement colour aliasing, a purely passive thermal management solution, a rapid deployment system, individual seven bit LED current control with two way display communication, auto-configuration and complete signal redundancy, all of which are in direct response to industry needs.


Collisions between pedestrians and vehicles continue to be a major problem throughout the world. Pedestrians trying to cross roads and railway tracks without any caution are often highly susceptible to collisions with vehicles and trains. Continuous financial, human and other losses have prompted transport-related organizations to come up with various solutions addressing this issue. However, the quest for new and significant improvements in this area is still ongoing. This work addresses this issue by building a general framework using computer vision techniques to automatically monitor pedestrian movements in such high-risk areas, to enable better analysis of activity and the creation of future alerting strategies. As a result of rapid development in the electronics and semiconductor industry there is extensive deployment of CCTV cameras in public places to capture video footage. This footage can then be used to analyse crowd activities in those particular places. This work seeks to identify the abnormal behaviour of individuals in video footage. In this work we propose using a Semi-2D Hidden Markov Model (HMM), Full-2D HMM and Spatial HMM to model the normal activities of people. The outliers of the model (i.e. those observations with insufficient likelihood) are identified as abnormal activities. Location features, flow features and optical flow textures are used as the features for the model. The proposed approaches are evaluated using the publicly available UCSD datasets, and we demonstrate improved performance using a Semi-2D Hidden Markov Model compared to other state-of-the-art methods. Further, we illustrate how our proposed methods can be applied to detect anomalous events at rail level crossings.
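The outlier test described above (flagging observations whose likelihood under the learned "normal" model is insufficient) can be sketched as follows. A single multivariate Gaussian stands in for the paper's HMM variants, and the threshold quantile is an illustrative choice, not a value from the paper.

```python
import numpy as np

def fit_normal_model(X):
    """Fit a multivariate Gaussian to features of normal activity.
    (A stand-in for the HMM training stage; regularised for stability.)"""
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])
    return mu, cov

def log_likelihood(X, mu, cov):
    """Per-observation Gaussian log-likelihood."""
    d = X.shape[1]
    diff = X - mu
    inv = np.linalg.inv(cov)
    mah = np.einsum('ij,jk,ik->i', diff, inv, diff)  # Mahalanobis terms
    return -0.5 * (mah + d * np.log(2 * np.pi) + np.log(np.linalg.det(cov)))

def detect_anomalies(X_train, X_test, quantile=0.01):
    """Flag test observations whose likelihood falls below a low
    quantile of the training (normal) likelihoods."""
    mu, cov = fit_normal_model(X_train)
    thresh = np.quantile(log_likelihood(X_train, mu, cov), quantile)
    return log_likelihood(X_test, mu, cov) < thresh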


There are several methods for determining the proteoglycan content of cartilage in biomechanics experiments, including assay-based methods and histochemistry or spectrophotometry protocols in which quantification is determined biochemically. More recently, a method has emerged that quantifies proteoglycan content by applying image processing algorithms, e.g. in ImageJ, to histological micrographs, with advantages including time savings and low cost. However, it is unknown whether this image analysis method produces results comparable to those obtained from the biochemical methodology. This paper compares the results of a well-established chemical method with those obtained using image analysis to determine the proteoglycan content of visually normal cartilage samples (n=33) and their progressively degraded counterparts. The results reveal a strong linear relationship with a coefficient of determination R² = 0.9928, leading to the conclusion that the image analysis methodology is a viable alternative to spectrophotometry.


The huge amount of CCTV footage available makes it very burdensome to process these videos manually through human operators. This has made automated processing of video footage through computer vision technologies necessary. During the past several years, there has been a large effort to detect abnormal activities through computer vision techniques. Typically, the problem is formulated as a novelty detection task where the system is trained on normal data and is required to detect events which do not fit the learned 'normal' model. There is no precise and exact definition for an abnormal activity; it is dependent on the context of the scene. Hence there is a requirement for different feature sets to detect different kinds of abnormal activities. In this work we evaluate the performance of different state-of-the-art features to detect the presence of abnormal objects in the scene. These include optical flow vectors to detect motion-related anomalies, and textures of optical flow and image textures to detect the presence of abnormal objects. These extracted features, in different combinations, are modelled using different state-of-the-art models such as the Gaussian mixture model (GMM) and Semi-2D Hidden Markov model (HMM) to analyse their performance. Further, we apply perspective normalization to the extracted features to compensate for perspective distortion due to the distance between the camera and the objects of consideration. The proposed approach is evaluated using the publicly available UCSD datasets and we demonstrate improved performance compared to other state-of-the-art methods.
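The perspective-normalization step mentioned above might be sketched as follows. The linear scale-versus-row model, the reference rows, and the scale values are all illustrative assumptions (the paper does not specify its model here): the idea is only that pixel-level features such as optical-flow magnitudes are divided by the apparent object scale expected at their image row.

```python
import numpy as np

def perspective_scale(rows, row_near, scale_near, row_far, scale_far):
    """Linearly interpolate apparent object scale as a function of image
    row, assuming more distant objects appear higher in the frame."""
    t = (rows - row_far) / (row_near - row_far)
    return scale_far + t * (scale_near - scale_far)

def normalise_flow(flow_mag, rows, row_near=200, scale_near=1.0,
                   row_far=50, scale_far=0.25):
    """Divide flow magnitudes by the expected scale at their row, so the
    same physical motion yields similar feature values at any depth."""
    return flow_mag / perspective_scale(rows, row_near, scale_near,
                                        row_far, scale_far)
```

With this compensation, a pedestrian moving at walking speed produces comparable feature values whether they are near the camera or far from it.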


Clustering identities in a broadcast video is a useful task to aid in video annotation and retrieval. Quality-based frame selection is a crucial task in video face clustering, to both improve the clustering performance and reduce the computational cost. We present a framework that selects the highest quality frames available in a video for face clustering. This frame selection technique is based on low-level and high-level features (face symmetry, sharpness, contrast and brightness) to select the highest quality facial images available in a face sequence for clustering. We also consider the temporal distribution of the faces to ensure that selected faces are taken at times distributed throughout the sequence. Normalized feature scores are fused and frames with high quality scores are used in a Local Gabor Binary Pattern Histogram Sequence based face clustering system. We present a news video database to evaluate the clustering system performance. Experiments on the newly created news database show that the proposed method selects the best quality face images in the video sequence, resulting in improved clustering performance.
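The quality measures named above (sharpness, contrast, brightness, symmetry) and their normalized score fusion might be sketched as follows. The specific formulas and the equal weighting are assumptions for illustration; the paper's exact measures may differ.

```python
import numpy as np

def sharpness(img):
    # variance of a discrete Laplacian response (wrap borders for brevity)
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
           np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
    return lap.var()

def contrast(img):
    return img.std()

def brightness(img):
    # closeness of mean intensity to mid-grey, for images in [0, 1]
    return 1.0 - abs(img.mean() - 0.5) * 2.0

def symmetry(img):
    # similarity between the face image and its horizontal mirror
    return 1.0 - np.abs(img - img[:, ::-1]).mean()

def quality_measures(img):
    return np.array([sharpness(img), contrast(img),
                     brightness(img), symmetry(img)])

def select_best_frames(frames, k):
    """Min-max normalise each measure across frames, fuse with equal
    weights, and return indices of the k highest-scoring frames."""
    scores = np.array([quality_measures(f) for f in frames])
    lo, hi = scores.min(0), scores.max(0)
    norm = (scores - lo) / np.where(hi > lo, hi - lo, 1.0)
    fused = norm.mean(axis=1)
    return np.argsort(fused)[::-1][:k]
```

A temporal-spread constraint, as described in the abstract, would then be applied on top of this ranking so selected frames are not clustered in time.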


Fusion techniques can be used in biometrics to achieve higher accuracy. When biometric systems are in operation and the threat level changes, controlling the trade-off between detection error rates can reduce the impact of an attack. In a fused system, varying a single threshold does not allow this to be achieved, but systematic adjustment of a set of parameters does. In this paper, fused decisions from a multi-part, multi-sample sequential architecture are investigated for that purpose in an iris recognition system. A specific implementation of the multi-part architecture is proposed and the effect of the number of parts and samples in the resultant detection error rate is analysed. The effectiveness of the proposed architecture is then evaluated under two specific cases of obfuscation attack: miosis and mydriasis. Results show that robustness to such obfuscation attacks is achieved, since lower error rates than in the case of the non-fused base system are obtained.
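The multi-part decision fusion idea above can be illustrated with a small calculation. This is a generic "accept if at least k of n parts match" sketch under an independence assumption between parts, not the paper's exact sequential architecture: it shows how varying k (rather than a single threshold) systematically trades false match rate against false non-match rate.

```python
from math import comb

def binom_tail(n, p, k):
    """P[X >= k] for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def fused_error_rates(n, k, fmr_part, fnmr_part):
    """Fused error rates for an 'at least k of n parts match' rule,
    assuming independent part decisions."""
    fused_fmr = binom_tail(n, fmr_part, k)                # impostor passes >= k parts
    fused_fnmr = 1.0 - binom_tail(n, 1.0 - fnmr_part, k)  # genuine passes < k parts
    return fused_fmr, fused_fnmr
```

Raising k drives the fused false match rate down (useful at a high threat level) at the cost of a higher false non-match rate, which is exactly the kind of trade-off control the abstract argues a single fused threshold cannot provide.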


Field robots often rely on laser range finders (LRFs) to detect obstacles and navigate autonomously. Despite recent progress in sensing technology and perception algorithms, adverse environmental conditions, such as the presence of smoke, remain a challenging issue for these robots. In this paper, we investigate the possibility to improve laser-based perception applications by anticipating situations when laser data are affected by smoke, using supervised learning and state-of-the-art visual image quality analysis. We propose to train a k-nearest-neighbour (kNN) classifier to recognise situations where a laser scan is likely to be affected by smoke, based on visual data quality features. This method is evaluated experimentally using a mobile robot equipped with LRFs and a visual camera. The strengths and limitations of the technique are identified and discussed, and we show that the method is beneficial if conservative decisions are the most appropriate.
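The classification step described above can be sketched with a minimal kNN majority vote. The feature vectors and labels below are synthetic placeholders, not the paper's visual image-quality features.

```python
import numpy as np

def knn_predict(X_train, y_train, X_query, k=3):
    """Classify each query (e.g. 'scan smoke-affected' vs 'clear') by
    majority vote among its k nearest training examples."""
    preds = []
    for q in X_query:
        d = np.linalg.norm(X_train - q, axis=1)      # Euclidean distances
        nearest = y_train[np.argsort(d)[:k]]         # labels of k nearest
        preds.append(np.bincount(nearest).argmax())  # majority vote
    return np.array(preds)
```

A conservative deployment, as the abstract suggests, would treat a "smoke-affected" prediction as a cue to discount or ignore the corresponding laser data.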


For a planetary rover to successfully traverse across unstructured terrain autonomously, one of the major challenges is to assess its local traversability such that it can plan a trajectory to achieve its mission goals efficiently while minimising risk to the vehicle itself. This paper aims to provide a comparative study on different approaches for representing the geometry of Martian terrain for the purpose of evaluating terrain traversability. An accurate representation of the geometric properties of the terrain is essential as it can directly affect the determination of traversability for a ground vehicle. We explore current state-of-the-art techniques for terrain estimation, in particular Gaussian Processes (GP) in various forms, and discuss the suitability of each technique in the context of an unstructured Martian terrain. Furthermore, we present the limitations of regression techniques in terms of spatial correlation and continuity assumptions, and the impact on traversability analysis of a planetary rover across unstructured terrain. The analysis was performed on datasets of the Mars Yard at the Powerhouse Museum in Sydney, obtained using the onboard RGB-D camera.
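As a sketch of the GP-based terrain representation discussed above, here is a minimal squared-exponential GP regressor over 2-D terrain coordinates. The kernel, length scale and noise level are illustrative hyperparameters, not values from the paper; the predictive standard deviation is what makes GPs attractive for traversability, since it quantifies confidence in the elevation estimate.

```python
import numpy as np

def sq_exp_kernel(A, B, length=1.0, signal_var=1.0):
    """Squared-exponential covariance between point sets A (n,2), B (m,2)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return signal_var * np.exp(-0.5 * d2 / length**2)

def gp_predict(X, y, X_star, noise=1e-2, length=1.0):
    """GP regression: predictive mean and std of elevation at X_star
    given observed (x, y) -> elevation samples (X, y)."""
    K = sq_exp_kernel(X, X, length) + noise * np.eye(len(X))
    Ks = sq_exp_kernel(X_star, X, length)
    Kss = sq_exp_kernel(X_star, X_star, length)
    alpha = np.linalg.solve(K, y)
    mean = Ks @ alpha
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.sqrt(np.clip(np.diag(cov), 0, None))
```

Far from any observed point, the predictive std reverts to the prior, flagging regions where traversability should not be trusted.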


Camera-laser calibration is necessary for many robotics and computer vision applications. However, existing calibration toolboxes still require laborious effort from the operator in order to achieve reliable and accurate results. This paper proposes algorithms that augment two existing trusted calibration methods with automatic extraction of the calibration object from the sensor data. The result is a complete procedure that allows for automatic camera-laser calibration. The first stage of the procedure is automatic camera calibration, which is useful in its own right for many applications. The chessboard extraction algorithm it provides is shown to outperform openly available techniques. The second stage completes the procedure by providing automatic camera-laser calibration. The procedure has been verified by extensive experimental tests, with the proposed algorithms providing a major reduction in the time required from an operator in comparison to manual methods.


Considering the wide spectrum of situations it may encounter, a robot navigating autonomously in outdoor environments needs to be endowed with several operating modes, for both robustness and efficiency. Indeed, the terrain it has to traverse may be composed of flat or rough areas, low-cohesion soils such as sand dunes, concrete roads, etc. Traversing these various kinds of environment calls for different navigation and/or locomotion functionalities, especially if the robot is endowed with different locomotion abilities, such as the robots WorkPartner, Hylos [4], Nomad or the Marsokhod rovers. Numerous rover navigation techniques have been proposed, each suited to a particular environment context (e.g. path following, obstacle avoidance in more or less cluttered environments, rough terrain traverses). However, few contributions in the literature tackle the problem of autonomously selecting the most suitable mode [3]. Most existing work is instead devoted to the passive analysis of a single navigation mode, as in [2]. Fault detection is of course essential: one can imagine that proper monitoring of the Mars Exploration Rover Opportunity could have prevented the rover from being stuck in a dune for several weeks, by detecting non-nominal behaviour of some parameters. But the ability to recover from the anticipated problem by switching to a better-suited navigation mode would bring greater autonomy, and therefore better overall efficiency. We propose here a probabilistic framework to achieve this, which fuses environment-related and robot-related information in order to actively control the rover operations.
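The probabilistic fusion of environment-related and robot-related information for mode selection might be sketched as a simple Bayesian update. The mode names, likelihood values, and the conditional-independence assumption between the two observation streams are all invented for illustration; they are not the paper's model.

```python
import numpy as np

# Hypothetical candidate navigation modes (illustrative names only).
MODES = ["path_following", "obstacle_avoidance", "rough_terrain"]

def select_mode(prior, env_lik, robot_lik):
    """Posterior over modes: prior x P(env obs | mode) x P(robot obs | mode),
    assuming the two observation streams are conditionally independent
    given the mode. Returns the posterior-maximising mode."""
    post = prior * env_lik * robot_lik
    post /= post.sum()
    return MODES[int(np.argmax(post))], post
```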


It is well recognized that many scientifically interesting sites on Mars are located in rough terrains. Therefore, to enable safe autonomous operation of a planetary rover during exploration, the ability to accurately estimate terrain traversability is critical. In particular, this estimate needs to account for terrain deformation, which significantly affects the vehicle attitude and configuration. This paper presents an approach to estimate vehicle configuration, as a measure of traversability, in deformable terrain by learning the correlation between exteroceptive and proprioceptive information in experiments. We first perform traversability estimation with rigid terrain assumptions, then correlate the output with experienced vehicle configuration and terrain deformation using a multi-task Gaussian Process (GP) framework. Experimental validation of the proposed approach was performed on a prototype planetary rover and the vehicle attitude and configuration estimate was compared with state-of-the-art techniques. We demonstrate the ability of the approach to accurately estimate traversability with uncertainty in deformable terrain.


Reliable robotic perception and planning are critical to performing autonomous actions in uncertain, unstructured environments. In field robotic systems, automation is achieved by interpreting exteroceptive sensor information to infer something about the world. This is then mapped to provide a consistent spatial context, so that actions can be planned around the predicted future interaction of the robot and the world. The whole system is as reliable as the weakest link in this chain. In this paper, the term mapping is used broadly to describe the transformation of range-based exteroceptive sensor data (such as LIDAR or stereo vision) to a fixed navigation frame, so that it can be used to form an internal representation of the environment. The coordinate transformation from the sensor frame to the navigation frame is analyzed to produce a spatial error model that captures the dominant geometric and temporal sources of mapping error. This allows the mapping accuracy to be calculated at run time. A generic extrinsic calibration method for exteroceptive range-based sensors is then presented to determine the sensor location and orientation. This allows systematic errors in individual sensors to be minimized, and when multiple sensors are used, it minimizes the systematic contradiction between them to enable reliable multisensor data fusion. The mathematical derivations at the core of this model are not particularly novel or complicated, but the rigorous analysis and application to field robotics seems to be largely absent from the literature to date. The techniques in this paper are simple to implement, and they offer a significant improvement to the accuracy, precision, and integrity of mapped information. Consequently, they should be employed whenever maps are formed from range-based exteroceptive sensor data. © 2009 Wiley Periodicals, Inc.
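The sensor-to-navigation coordinate chain described above can be sketched with homogeneous transforms. Planar (x, y, yaw) poses are used here for brevity, and the frame layout is illustrative; the full 6-DOF case follows the same pattern with 4×4 matrices.

```python
import numpy as np

def pose_to_matrix(x, y, yaw):
    """Homogeneous 2-D transform for a pose (x, y, yaw)."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1]])

def map_point(p_sensor, T_body_sensor, T_nav_body):
    """Carry a range point from the sensor frame into the navigation
    frame: nav <- body (vehicle pose) <- sensor (extrinsic calibration)."""
    p = np.array([p_sensor[0], p_sensor[1], 1.0])
    return (T_nav_body @ T_body_sensor @ p)[:2]
```

Errors in either transform (a mis-calibrated extrinsic, or a pose estimate lagging in time) propagate directly into the mapped point, which is why the paper's error model treats both the geometric and temporal terms of this chain.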


Many applications can benefit from the accurate surface temperature estimates that can be made using a passive thermal-infrared camera. However, the process of radiometric calibration which enables this can be both expensive and time consuming. An ad hoc approach for performing radiometric calibration is proposed which does not require specialized equipment and can be completed in a fraction of the time of the conventional method. The proposed approach utilizes the mechanical properties of the camera to estimate scene temperatures automatically, and uses these target temperatures to model the effect of sensor temperature on the digital output. A comparison with a conventional approach using a blackbody radiation source shows that the accuracy of the method is sufficient for many tasks requiring temperature estimation. Furthermore, a novel visualization method is proposed for displaying the radiometrically calibrated images to human operators. The representation employs an intuitive coloring scheme and allows the viewer to perceive a large variety of temperatures accurately.
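A minimal sketch of the kind of calibration model implied above, assuming (purely for illustration; the paper does not state this form) that digital output depends linearly on both scene temperature and camera sensor temperature, with coefficients recovered by least squares from reference observations:

```python
import numpy as np

def fit_radiometric(digital, t_scene, t_sensor):
    """Least-squares fit of D = a*T_scene + b*T_sensor + c from
    reference observations with known scene temperatures."""
    A = np.column_stack([t_scene, t_sensor, np.ones_like(t_scene)])
    coeffs, *_ = np.linalg.lstsq(A, digital, rcond=None)
    return coeffs  # (a, b, c)

def estimate_scene_temp(digital, t_sensor, coeffs):
    """Invert the fitted model to estimate scene temperature from the
    digital output and the current sensor temperature."""
    a, b, c = coeffs
    return (digital - b * t_sensor - c) / a
```

The sensor-temperature term is the point of the abstract's approach: it models how the camera's own thermal state shifts the digital output, so scene temperatures stay consistent as the camera warms up.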


Energy auditing is an effective but costly approach for reducing the long-term energy consumption of buildings. When well-executed, energy loss can be quickly identified in the building structure and its subsystems. This then presents opportunities for improving energy efficiency. We present a low-cost, portable technology called "HeatWave" which allows non-experts to generate detailed 3D surface temperature models for energy auditing. This handheld 3D thermography system consists of two commercially available imaging sensors and a set of software algorithms which can be run on a laptop. The 3D model can be visualized in real-time by the operator so that they can monitor their degree of coverage as the sensors are used to capture data. In addition, results can be analyzed offline using the proposed "Spectra" multispectral visualization toolbox. The presence of surface temperature data in the generated 3D model enables the operator to easily identify and measure thermal irregularities such as thermal bridges, insulation leaks, moisture build-up and HVAC faults. Moreover, 3D models generated from subsequent audits of the same environment can be automatically compared to detect temporal changes in conditions and energy use over time.