942 results for Feature Point Detection
Abstract:
This report presents an algorithm for locating the cut points for, and separating, vertically attached traffic signs in Sweden. The algorithm combines several digital image processing steps: a binary image representation that marks the visual object and its complex rectangular background with ones and zeros respectively; an improved cross-correlation that measures the similarity of 2D objects and filters traffic sign candidates; a simplified shape decomposition that iteratively smooths the contour of the visual object to reduce white noise; flipping point detection that locates black-noise candidates; and a chasm-filling algorithm that eliminates black noise, determines the final cut points and separates the originally attached traffic signs into individual ones. At each step, the intermediate results and the practical efficiency are presented to show the advantages and disadvantages of the developed algorithm. The report concentrates on contour-based recognition of Swedish traffic signs; the general shapes covered are the upward triangle, downward triangle, circle, rectangle and octagon. Finally, a demonstration program shows how the algorithm works in a real-time environment.
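A minimal sketch of the cross-correlation filtering step, assuming binary candidate masks and binary shape templates of the same size (the function names and threshold are illustrative, not the report's):

import numpy as np

def normalized_cross_correlation(candidate, template):
    """Normalized cross-correlation between two equally sized binary masks."""
    c = candidate.astype(float) - candidate.mean()
    t = template.astype(float) - template.mean()
    denom = np.sqrt((c * c).sum() * (t * t).sum())
    return float((c * t).sum() / denom) if denom > 0 else 0.0

def filter_candidates(candidates, templates, threshold=0.7):
    """Keep candidate masks whose best correlation with any shape template exceeds the threshold."""
    kept = []
    for cand in candidates:
        score = max(normalized_cross_correlation(cand, tpl) for tpl in templates)
        if score >= threshold:
            kept.append(cand)
    return kept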
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
In this paper, the concept of the Matching Parallelepiped (MP) is presented. It is shown that the volume of the MP can be used as an additional measure of 'distance' between a pair of candidate points in a matching algorithm based on Relaxation Labeling (RL). The volume of the MP is related to the epipolar geometry, and using this measure works as an epipolar constraint in the RL process, reducing the effort in the matching algorithm since it is not necessary to explicitly determine the equations of the epipolar lines or to compute the distance of a candidate point to each epipolar line. Since the Relative Orientation (RO) parameters are unknown at the beginning of the process, an initial matching based on gradients, intensities and correlation is obtained. Based on this set of labeled points, the RO is determined and the epipolar constraint is included in the algorithm. The results obtained show that the proposed approach is suitable for determining feature-point matching with simultaneous estimation of camera orientation parameters, even in cases where the pair of optical axes is not parallel.
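A minimal sketch of the underlying geometric idea, assuming known camera centres and viewing-ray directions (variable names are illustrative): the scalar triple product of the baseline and the two rays gives the parallelepiped volume, which vanishes when the coplanarity (epipolar) condition holds.

import numpy as np

def matching_volume(c1, c2, ray1, ray2):
    """Volume of the parallelepiped spanned by the baseline and the two viewing rays.

    c1, c2     : 3D camera centres
    ray1, ray2 : unit direction vectors of the rays through the candidate image points
    A small |volume| means the rays are nearly coplanar with the baseline,
    i.e. the pair nearly satisfies the epipolar constraint.
    """
    baseline = c2 - c1
    return float(np.dot(baseline, np.cross(ray1, ray2)))

# Example: a perfectly matching pair gives (near) zero volume.
c1, c2 = np.array([0.0, 0, 0]), np.array([1.0, 0, 0])
X = np.array([0.3, 0.2, 5.0])                      # a 3D point seen by both cameras
r1 = (X - c1) / np.linalg.norm(X - c1)
r2 = (X - c2) / np.linalg.norm(X - c2)
print(abs(matching_volume(c1, c2, r1, r2)))        # ~0 for a consistent match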
Abstract:
This paper presents different application scenarios for which the registration of sub-sequence reconstructions or multi-camera reconstructions is essential for successful camera motion estimation and 3D reconstruction from video. The registration is achieved by merging unconnected feature point tracks between the reconstructions. One application is drift removal for sequential camera motion estimation of long sequences. The state-of-the-art in drift removal is to apply a RANSAC approach to find unconnected feature point tracks. In this paper an alternative spectral algorithm for pairwise matching of unconnected feature point tracks is used. It is then shown that the algorithms can be combined and applied to novel scenarios where independent camera motion estimations must be registered into a common global coordinate system. In the first scenario multiple moving cameras, which capture the same scene simultaneously, are registered. A second new scenario occurs in situations where the tracking of feature points during sequential camera motion estimation fails completely, e.g., due to large occluding objects in the foreground, and the unconnected tracks of the independent reconstructions must be merged. In the third scenario image sequences of the same scene, which are captured under different illuminations, are registered. Several experiments with challenging real video sequences demonstrate that the presented techniques work in practice.
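A minimal sketch of spectral pairwise matching in the spirit described above, assuming candidate track correspondences have already been scored by a symmetric pairwise-consistency affinity matrix (the affinity construction is omitted and the names are illustrative):

import numpy as np

def spectral_match(affinity, candidates):
    """Greedy spectral matching of candidate track correspondences.

    affinity  : (n, n) symmetric matrix; entry (a, b) scores how mutually
                consistent candidate assignments a and b are.
    candidates: list of (track_i, track_j) index pairs, one per row of `affinity`.
    Returns a one-to-one subset of candidates selected by confidence.
    """
    # Principal eigenvector of the affinity matrix gives a confidence per candidate.
    vals, vecs = np.linalg.eigh(affinity)
    x = np.abs(vecs[:, np.argmax(vals)])

    selected, used_i, used_j = [], set(), set()
    for a in np.argsort(-x):                 # strongest candidates first
        i, j = candidates[a]
        if x[a] <= 0 or i in used_i or j in used_j:
            continue                         # enforce one-to-one track matching
        selected.append((i, j))
        used_i.add(i)
        used_j.add(j)
    return selected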
Abstract:
This paper discusses the target localization problem in wireless visual sensor networks. Specifically, each node with a low-resolution camera extracts multiple feature points to represent the target at the sensor-node level. A statistical method is presented that merges the position information from different sensor nodes in order to select the most correlated feature point pair at the base station. This method reduces the influence of the accuracy of target extraction on the accuracy of target localization in the universal coordinate system. Simulations show that, compared with a related approach, the proposed method yields more accurate target localization and a better trade-off between camera node usage and localization accuracy.
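A minimal sketch of one way to fuse two nodes' observations, assuming each node reports bearing rays to its extracted feature points; the pair-selection criterion and the triangulation helper are illustrative, not the paper's method:

import numpy as np

def triangulate_midpoint(c1, d1, c2, d2):
    """Least-squares midpoint of two 3D rays (camera centre c, unit direction d)."""
    # Solve for scalars t1, t2 minimising |(c1 + t1*d1) - (c2 + t2*d2)|^2.
    A = np.stack([d1, -d2], axis=1)
    t1, t2 = np.linalg.lstsq(A, c2 - c1, rcond=None)[0]
    return 0.5 * ((c1 + t1 * d1) + (c2 + t2 * d2))

def localize_target(c1, rays1, c2, rays2, correlation):
    """Pick the most correlated feature-point pair between two nodes and triangulate it.

    correlation[i, j] scores how well feature i of node 1 matches feature j of node 2.
    """
    i, j = np.unravel_index(np.argmax(correlation), correlation.shape)
    return triangulate_midpoint(c1, rays1[i], c2, rays2[j])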
Abstract:
The aim of this thesis is to develop new methods for detecting change points and/or trends. After a brief theoretical introduction to splines, several change point detection methods already existing in the literature are presented. Then, new change point detection methods that use splines and Bayesian statistics are introduced. In addition, to make clear where the method based on Bayesian statistics comes from, an introduction to Bayesian theory is given. Using simulations, we compare the power of all of these methods. Still using simulations, the most effective new method is analysed in greater depth. It is then applied to real data. A brief conclusion recapitulates the thesis.
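A minimal sketch of Bayesian single change point detection for a piecewise-constant Gaussian mean with known variance and a uniform prior over change point locations (a simplified illustration, not the thesis's spline-based method):

import numpy as np

def changepoint_posterior(y, sigma=1.0):
    """Posterior over the location of a single change in the mean of a Gaussian series.

    Uniform prior over change points; segment means are fitted by their MLE,
    so this is a profile-likelihood approximation of the posterior.
    """
    n = len(y)
    log_like = np.full(n, -np.inf)
    for k in range(1, n):                         # change between index k-1 and k
        left, right = y[:k], y[k:]
        resid = np.concatenate([left - left.mean(), right - right.mean()])
        log_like[k] = -0.5 * np.sum(resid ** 2) / sigma ** 2
    post = np.exp(log_like - np.max(log_like[1:]))
    post[0] = 0.0
    return post / post.sum()

y = np.concatenate([np.random.normal(0, 1, 50), np.random.normal(2, 1, 50)])
print(np.argmax(changepoint_posterior(y)))        # most probable change point (about 50)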
Abstract:
Polymer optical fibre (POF) is a relatively new technology that presents an innovative approach for ultrasonic endoscopic applications. Currently, piezoelectric transducers are the typical detectors of choice, albeit with a limited bandwidth due to their resonant nature and a sensitivity that decreases proportionally with their size. Optical fibres provide immunity from electromagnetic interference, and POF in particular has more suitable physical characteristics than silica optical fibre. The most important of these are a lower acoustic impedance, a reduced Young's modulus and a higher acoustic sensitivity than single-mode silica fibre at both 1 MHz and 10 MHz. POF therefore offers an interesting alternative to existing technology. Intrinsic fibre structures such as Bragg gratings and Fabry-Perot cavities may be inscribed into the fibre core using UV lasers. These gratings are a modulation of the refractive index of the fibre core and provide the advantages of high reflectivity, customisable bandwidth and point detection. We present a compact in-fibre ultrasonic point detector based upon a POF Bragg grating (POFBG) sensor. We demonstrate that the detector is capable of leaving the laboratory environment by using connectorised fibre sensors, and make a case for endoscopic ultrasonic detection through the use of a mounting structure that better mimics the environment of an endoscopic probe. We measure the effects of water immersion on POFBGs and analyse the ultrasonic response at 1, 5 and 10 MHz.
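A minimal sketch of how a fibre Bragg grating converts strain (for example from an acoustic pressure wave) into a wavelength shift, using the standard Bragg condition; the numeric values below are illustrative placeholders, not measured properties of the fibre described above:

def bragg_wavelength(n_eff, period_nm):
    """Bragg condition: lambda_B = 2 * n_eff * grating period."""
    return 2.0 * n_eff * period_nm

def wavelength_shift(lambda_b_nm, strain, photoelastic_coeff):
    """First-order shift of the Bragg wavelength under axial strain:
    delta_lambda ~= lambda_B * (1 - p_e) * strain."""
    return lambda_b_nm * (1.0 - photoelastic_coeff) * strain

lam = bragg_wavelength(n_eff=1.49, period_nm=217.0)                 # roughly 650 nm
print(wavelength_shift(lam, strain=1e-6, photoelastic_coeff=0.1))   # shift per microstrain (nm)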
Abstract:
A structural time series model is one that is formulated in terms of components with a direct interpretation. In this paper, the discussion focuses on a dynamic modeling procedure based on the state space approach (associated with the Kalman filter) in the context of surface water quality monitoring, in order to analyze and evaluate the temporal evolution of the environmental variables and thus identify trends or possible changes in water quality (change point detection). The approach is applied to environmental time series of surface water quality variables in a river basin. The statistical modeling procedure is applied to monthly values of physico-chemical variables measured in a network of 8 water monitoring sites over a 15-year period (1999-2014) in the River Ave hydrological basin, located in the Northwest region of Portugal.
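A minimal sketch of the state space idea for a single variable: a local-level model filtered with the Kalman recursions (the noise variances are illustrative, not values estimated from the River Ave data):

import numpy as np

def local_level_filter(y, var_obs=1.0, var_level=0.1):
    """Kalman filter for the local-level model:
       y_t = mu_t + eps_t,   mu_t = mu_{t-1} + eta_t."""
    mu, P = y[0], var_obs            # initial level and its variance
    levels = []
    for obs in y:
        # Prediction step.
        P = P + var_level
        # Update step.
        K = P / (P + var_obs)        # Kalman gain
        mu = mu + K * (obs - mu)
        P = (1.0 - K) * P
        levels.append(mu)
    return np.array(levels)

y = np.concatenate([np.random.normal(5, 1, 60), np.random.normal(8, 1, 60)])
smooth = local_level_filter(y)       # filtered level; a persistent jump suggests a change point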
Abstract:
The analysis of rockfall characteristics and spatial distribution is fundamental to understanding and modeling the main factors that predispose to failure. In our study we analysed LiDAR point clouds aiming to: (1) detect and characterise single rockfalls; (2) investigate their spatial distribution. To this end, different clustering algorithms were applied: (1a) Nearest Neighbour Clutter Removal (NNCR) in combination with Expectation-Maximization (EM) in order to separate feature points from clutter; (1b) a density-based algorithm (DBSCAN) was applied to isolate the single clusters (i.e. the rockfall events); (2) finally, we computed Ripley's K-function to investigate the global spatial pattern of the extracted rockfalls. The method allowed proper identification and characterization of more than 600 rockfalls that occurred on a cliff located in Puigcercos (Catalonia, Spain) during a time span of six months. The spatial distribution of these events showed that the rockfalls were clustered within a well-defined distance range. Computations were carried out using the free R software for statistical computing and graphics. Understanding the spatial distribution of precursory rockfalls may shed light on the forecasting of future failures.
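A minimal sketch of the density-based clustering step using scikit-learn's DBSCAN to isolate individual rockfall events from candidate change points (the eps and min_samples values are illustrative; the study itself used R):

import numpy as np
from sklearn.cluster import DBSCAN

def extract_rockfalls(points, eps=0.5, min_samples=10):
    """Group 3D change-detection points into individual rockfall events.

    points: (n, 3) array of LiDAR points flagged as changed between two scans.
    Returns a dict {event_id: (m, 3) array}; DBSCAN labels noise as -1 and it is dropped.
    """
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
    return {lab: points[labels == lab] for lab in set(labels) if lab != -1}

events = extract_rockfalls(np.random.rand(1000, 3) * 10)
print(len(events), "rockfall candidates")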
Abstract:
This paper presents a strategy for solving the feature matching problem in calibrated very wide-baseline camera settings. In this kind of setting, perspective distortion, depth discontinuities and occlusion represent enormous challenges. The proposed strategy addresses them by using geometrical information, specifically by exploiting epipolar constraints. As a result, it provides a sparse set of reliable feature points for which the 3D position is accurately recovered. Special features known as junctions are used for robust matching. In particular, a strategy for refining junction end-point matching is proposed which improves on the usual junction-based approaches. This allows computing cross-correlation between perfectly aligned plane patches in both images, thus yielding better matching results. Evaluation of the experimental results proves the effectiveness of the proposed algorithm in very wide-baseline environments.
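A minimal sketch of how an epipolar constraint prunes candidate matches, assuming a known fundamental matrix F between the two calibrated views (the threshold and helper names are illustrative):

import numpy as np

def epipolar_distance(F, x1, x2):
    """Distance (in pixels) of point x2 to the epipolar line of x1 (both in homogeneous coordinates)."""
    line = F @ x1                       # epipolar line in image 2: a*x + b*y + c = 0
    return abs(line @ x2) / np.hypot(line[0], line[1])

def prune_matches(F, pts1, pts2, candidates, max_dist=2.0):
    """Keep only candidate matches (i, j) whose points are consistent with the epipolar geometry."""
    keep = []
    for i, j in candidates:
        x1 = np.append(pts1[i], 1.0)
        x2 = np.append(pts2[j], 1.0)
        if epipolar_distance(F, x1, x2) <= max_dist:
            keep.append((i, j))
    return keep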
Abstract:
Recent advances in airborne Light Detection and Ranging (LIDAR) technology allow rapid and inexpensive measurement of topography over large areas. Airborne LIDAR systems usually return a 3-dimensional cloud of point measurements from reflective objects scanned by the laser beneath the flight path. This technology is becoming a primary method for extracting information about different kinds of geometric objects, such as high-resolution digital terrain models (DTMs), buildings and trees. In the past decade, LIDAR has attracted increasing interest from researchers in the fields of remote sensing and GIS. Compared to traditional data sources, such as aerial photography and satellite images, LIDAR measurements are not influenced by sun shadow and relief displacement. However, the voluminous data pose a new challenge for automated extraction of geometric information, because many raster image processing techniques cannot be directly applied to irregularly spaced LIDAR points.

In this dissertation, a framework is proposed to automatically extract different kinds of geometric objects, such as terrain and buildings, from LIDAR data. These objects are essential to numerous applications such as flood modeling, landslide prediction and hurricane animation. The framework consists of several intuitive algorithms. First, a progressive morphological filter was developed to detect non-ground LIDAR measurements. By gradually increasing the window size and elevation difference threshold of the filter, the measurements of vehicles, vegetation and buildings are removed, while ground data are preserved. Building measurements are then identified from the non-ground measurements using a region-growing algorithm based on a plane-fitting technique. Raw footprints for the segmented building measurements are derived by connecting boundary points and are further simplified and adjusted by several proposed operations to remove the noise caused by irregularly spaced LIDAR measurements. To reconstruct 3D building models, the raw 2D topology of each building is first extracted and then adjusted. Since the adjusting operations designed for simple building models do not work well on 2D topology, a 2D snake algorithm is proposed for topology adjustment. The 2D snake algorithm consists of newly defined energy functions for topology adjustment and a linear algorithm to find the minimal energy value of 2D snake problems. Data sets from urbanized areas including large institutional, commercial, and small residential buildings were employed to test the proposed framework. The results demonstrate that the proposed framework achieves very good performance.
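A minimal sketch of the progressive morphological filtering idea for ground extraction, applied to a rasterized minimum-elevation grid (the window sizes and thresholds are illustrative, and the point-to-grid step is omitted):

import numpy as np
from scipy import ndimage

def progressive_morphological_filter(grid, windows=(3, 5, 9), thresholds=(0.3, 0.8, 1.5)):
    """Flag non-ground cells in a minimum-elevation grid.

    For each increasing window size, a morphological opening estimates the ground
    surface; cells rising above it by more than the matching threshold are non-ground.
    """
    nonground = np.zeros(grid.shape, dtype=bool)
    surface = grid.copy()
    for w, t in zip(windows, thresholds):
        opened = ndimage.grey_opening(surface, size=(w, w))
        nonground |= (surface - opened) > t
        surface = opened                     # progressively smoothed ground estimate
    return nonground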
Abstract:
This work describes a novel use of the polymeric film poly(o-aminophenol) (PAP), which was made responsive to a specific protein. This was achieved through templated electropolymerization of aminophenol (AP) in the presence of the protein. The procedure involved adsorbing the protein on the electrode surface and thereafter electropolymerizing the aminophenol. Proteins embedded at the outer surface of the polymeric film were digested by proteinase K and then washed away, thereby creating vacant sites. The capacity of the templated film to specifically rebind protein was tested with myoglobin (Myo), a cardiac biomarker for ischemia. The films acted as biomimetic artificial antibodies and were produced on a gold (Au) screen-printed electrode (SPE), as a step towards disposable sensors for point-of-care applications. Raman spectroscopy was used to follow the surface modification of the Au-SPE. The ability of the material to rebind Myo was measured by electrochemical techniques, namely electrochemical impedance spectroscopy (EIS) and square wave voltammetry (SWV). The devices displayed linear responses to Myo in EIS and SWV assays down to 4.0 and 3.5 μg/mL, respectively, with detection limits of 1.5 and 0.8 μg/mL. Good selectivity was observed in the presence of troponin T (TnT) and creatine kinase (CKMB) in SWV assays, and accurate results were obtained in applications to spiked serum. The sensor described in this work is a potential tool for screening Myo at point-of-care, owing to its simple fabrication, disposability, short response time, low cost, and good sensitivity and selectivity.
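A minimal sketch of how a detection limit can be estimated from a linear calibration using the common 3.3*s/slope convention; the calibration numbers below are illustrative placeholders, not the data behind the values quoted above:

import numpy as np

def detection_limit(concentrations, responses):
    """Estimate the limit of detection from a linear calibration as 3.3 * s / |slope|,
    where s is the standard deviation of the fit residuals."""
    slope, intercept = np.polyfit(concentrations, responses, 1)
    residuals = responses - (slope * np.asarray(concentrations) + intercept)
    s = residuals.std(ddof=2)               # residual standard deviation
    return 3.3 * s / abs(slope)

# Illustrative calibration (concentration in ug/mL vs. sensor response).
conc = np.array([1, 2, 4, 8, 16, 32], dtype=float)
resp = 0.9 * conc + np.random.normal(0, 0.5, conc.size) + 2.0
print(detection_limit(conc, resp))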
Abstract:
In this paper we propose an endpoint detection system based on several features extracted from each speech frame, followed by a robust classifier (AdaBoost and bagging of decision trees, and a multilayer perceptron) and a finite state automaton (FSA). We compare the use of four different classifiers in this task. The FSA module consists of a 4-state decision logic that filters false alarms and false positives. The look-ahead of the proposed method is 7 frames, the number of frames that maximized the accuracy of the system. The system was tested with real signals recorded inside a car, with signal-to-noise ratios ranging from 6 dB to 30 dB. Finally, we present experimental results demonstrating that the system yields robust endpoint detection.
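A minimal sketch of a 4-state decision automaton that smooths frame-level speech/non-speech classifier outputs into endpoint decisions (the state set and hangover counts are illustrative, not the paper's exact logic):

def endpoint_fsa(frame_is_speech, onset_frames=3, hangover_frames=7):
    """4-state automaton: SILENCE -> MAYBE_SPEECH -> SPEECH -> MAYBE_SILENCE.

    frame_is_speech: iterable of per-frame booleans from the classifier.
    Yields the smoothed speech/non-speech decision for each frame.
    """
    state, count = "SILENCE", 0
    for speech in frame_is_speech:
        if state == "SILENCE":
            state, count = ("MAYBE_SPEECH", 1) if speech else ("SILENCE", 0)
        elif state == "MAYBE_SPEECH":
            count = count + 1 if speech else 0
            state = "SPEECH" if count >= onset_frames else ("MAYBE_SPEECH" if speech else "SILENCE")
        elif state == "SPEECH":
            state, count = ("SPEECH", 0) if speech else ("MAYBE_SILENCE", 1)
        else:  # MAYBE_SILENCE: wait for the hangover before declaring silence
            count = 0 if speech else count + 1
            state = "SPEECH" if speech else ("SILENCE" if count >= hangover_frames else "MAYBE_SILENCE")
        yield state in ("SPEECH", "MAYBE_SILENCE")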