942 results for Feature Point Detection
Abstract:
Freehand sketching is both a natural and crucial part of design, yet it is unsupported by current design automation software. We are working to combine the flexibility and ease of use of paper and pencil with the processing power of a computer to produce a design environment that feels as natural as paper, yet is considerably smarter. One of the most basic steps in accomplishing this is converting the original digitized pen strokes in the sketch into the intended geometric objects using feature point detection and approximation. We demonstrate how multiple sources of information can be combined for feature detection in strokes and apply this technique using two approaches to signal processing: one using simple average-based thresholding and a second using scale space.
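A hedged illustration of the average-based thresholding approach described above (a minimal sketch, not the authors' code): curvature and pen-speed information are combined, and samples with above-average curvature and below-average speed are flagged as candidate feature points.

    # Minimal sketch: feature point detection in a digitized pen stroke by
    # combining curvature and pen speed with average-based thresholding.
    # Parameter choices and the exact combination rule are assumptions.
    import numpy as np

    def feature_points(points, timestamps):
        """points: (n, 2) array of stroke samples; timestamps: (n,) array."""
        d = np.gradient(points, axis=0)                    # finite differences
        speed = np.linalg.norm(d, axis=1) / np.gradient(timestamps)
        angles = np.arctan2(d[:, 1], d[:, 0])
        curvature = np.abs(np.gradient(np.unwrap(angles)))  # turning rate

        # Average-based thresholding: keep samples whose curvature is above
        # the mean curvature AND whose speed is below the mean speed.
        candidates = (curvature > curvature.mean()) & (speed < speed.mean())
        return np.flatnonzero(candidates)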
Abstract:
The identification, tracking, and statistical analysis of tropical convective complexes using satellite imagery is explored in the context of identifying feature points suitable for tracking. The feature points are determined from the shape of the complexes using the distance transform technique. This approach has been applied to the determination of feature points for tropical convective complexes identified in a time series of global cloud imagery. The feature points are used to track the complexes, and from the tracks statistical diagnostic fields are computed. This approach allows the nature and distribution of organized deep convection in the Tropics to be explored.
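A minimal sketch of the distance-transform idea (an illustration, not the authors' implementation): given a binary mask of a complex, the pixel farthest from the boundary is a natural shape-based feature point for tracking.

    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def feature_point(mask):
        """mask: 2D boolean array, True inside the convective complex."""
        dist = distance_transform_edt(mask)   # distance to nearest background pixel
        return np.unravel_index(np.argmax(dist), dist.shape)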
Abstract:
In this study, the Schwarz Information Criterion (SIC) is applied to detect change-points in time series of surface water quality variables. The change-point analysis detected change-points in both the mean and the variance of the series under study. Time variations in environmental data are complex, and they can hinder the identification of change-points when traditional models are applied to this type of problem. The assumptions of normality and absence of correlation do not hold for some of the time series, so a simulation study is carried out to evaluate the methodology's performance when applied to non-normal and/or time-correlated data.
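A hedged sketch of SIC-based single change-point detection for Gaussian segments (the exact parameter count in the penalty term varies across formulations and is an assumption here): the criterion with a change at k is compared against the no-change model, and a change-point is reported only when it lowers the SIC.

    import numpy as np

    def sic_changepoint(x, min_seg=2):
        x = np.asarray(x, float)
        n = len(x)

        def seg_ll(seg):                      # maximized Gaussian log-likelihood
            v = seg.var()
            return -0.5 * len(seg) * (np.log(2.0 * np.pi * v) + 1.0)

        sic_null = -2.0 * seg_ll(x) + 2.0 * np.log(n)   # one mean, one variance
        cand = [(-2.0 * (seg_ll(x[:k]) + seg_ll(x[k:])) + 5.0 * np.log(n), k)
                for k in range(min_seg, n - min_seg + 1)]
        sic_min, k_hat = min(cand)
        return k_hat if sic_min < sic_null else None    # change point, or None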
Abstract:
This thesis is about the detection of local image features. The research topic belongs to the wider area of object detection, a machine vision and pattern recognition problem in which an object must be detected (located) in an image. State-of-the-art object detection methods often divide the problem into separate interest point detection and local image description steps, but this thesis uses a different technique, leading to higher-quality image features that enable more precise localization. Instead of using interest point detection, the landmark positions are marked manually. Therefore, the quality of the image features is not limited by the interest point detection phase, and the learning of image features is simplified. The approach combines interest point detection and local description into a single detection phase. Computational efficiency of the descriptor is therefore important, ruling out many commonly used descriptors as too heavy. Multiresolution Gabor features are the main descriptor in this thesis, and improving their efficiency is a significant part of the work. Actual image features are formed from descriptors by using a classifier, which can then recognize similar-looking patches in new images. The main classifier is based on Gaussian mixture models. Classifiers are used in a one-class configuration, where there are only positive training samples and no explicit background class. The local image feature detection method has been tested with two freely available face detection databases and a proprietary license plate database, and localization performance was very good in these experiments. Other applications of the same underlying techniques are also presented, including object categorization and fault detection.
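A sketch of the one-class configuration described above: a Gaussian mixture model is fitted to positive descriptor samples only, and new patches are accepted when their log-likelihood exceeds a threshold calibrated on the training data (the quantile-based threshold rule is an assumption, not taken from the thesis).

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def train_one_class(descriptors, n_components=4, quantile=0.05):
        """descriptors: (n, d) array of positive training samples."""
        gmm = GaussianMixture(n_components=n_components).fit(descriptors)
        threshold = np.quantile(gmm.score_samples(descriptors), quantile)
        return gmm, threshold

    def is_feature(gmm, threshold, patch_descriptor):
        # Accept a patch when its log-likelihood under the positive-class
        # model is at least as high as the calibrated threshold.
        return gmm.score_samples(patch_descriptor.reshape(1, -1))[0] >= threshold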
Abstract:
Background: Microbiological diagnostic procedures have changed significantly over the last decade. Initially, the implementation of the polymerase chain reaction (PCR) resulted in improved detection tests for microbes that were difficult or even impossible to detect by conventional methods such as culture and serology, especially in community-acquired respiratory tract infections (CA-RTI). A further improvement was the development of real-time PCR, which allows end-point detection and quantification, and many diagnostic laboratories have now implemented this powerful method. Objective: New high-performing and convenient molecular tests have now emerged that target in parallel many of the viruses and bacteria responsible for lower and/or upper respiratory tract infections. The range of test formats and microbial agents detected is evolving very quickly, and the added value of these new tests needs to be studied in terms of better use of antibiotics, better patient management, duration of hospitalization, and overall costs. Conclusions: Molecular tools for better microbial documentation of CA-RTI are now available. Controlled studies are required to address the clinical relevance of these new methods, for example the role of some newly detected respiratory viruses or of the microbial DNA load in a particular patient at a particular time. The future challenge for molecular diagnosis is to become easy to handle, highly efficient, and cost-effective, delivering rapid results with a direct impact on clinical management.
Abstract:
The extension of traditional data mining methods to time series has been effectively applied to a wide range of domains such as finance, econometrics, biology, security, and medicine. Many existing mining methods deal with the task of change-point detection, but very few provide a flexible approach. Querying specific change points with linguistic variables is particularly useful in crime analysis, where intuitive, understandable, and appropriate detection of changes can significantly improve the allocation of resources for timely and concise operations. In this paper, we propose an on-line method for detecting and querying change points in crime-related time series using a meaningful representation and a fuzzy inference system. Change-point detection is based on a shape-space representation, and linguistic terms describing geometric properties of the change points are used to express queries, offering the advantage of intuitiveness and flexibility. An empirical evaluation is first conducted on a crime data set to confirm the validity of the proposed method, and then on a financial data set to test its general applicability. A comparison with a similar change-point detection algorithm and a sensitivity analysis are also conducted. Results show that the method is able to accurately detect change points at very low computational cost. More broadly, the detection of specific change points within time series of virtually any domain is made more intuitive and more understandable, even for experts not familiar with data mining.
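An illustrative sketch of the querying idea only (the paper's shape-space representation and inference system are richer; all names and thresholds below are hypothetical): candidate points are scored by a simple geometric property, the change in local trend, and a fuzzy membership function maps that score to a linguistic term such as "sharp increase".

    import numpy as np

    def slope(seg):
        return np.polyfit(np.arange(len(seg)), seg, 1)[0]

    def sharp_increase_membership(x, t, w=10, lo=0.2, hi=1.0):
        """Degree to which index t is a 'sharp increase' change point."""
        delta = slope(x[t:t + w]) - slope(x[t - w:t])   # change in local trend
        return float(np.clip((delta - lo) / (hi - lo), 0.0, 1.0))

    def query(x, term_membership, alpha=0.7, w=10):
        """Return indices whose membership in the queried term exceeds alpha."""
        return [t for t in range(w, len(x) - w)
                if term_membership(x, t, w) >= alpha]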
Abstract:
In this thesis, we consider Bayesian inference on the detection of variance change-point models with scale mixtures of normal (SMN) distributions. This class of distributions is symmetric and thick-tailed and includes as special cases the Gaussian, Student-t, contaminated normal, and slash distributions. The proposed models provide greater flexibility for analyzing practical data, which often show heavy tails and may not satisfy the normality assumption. For the Bayesian analysis, we specify prior distributions for the unknown parameters in the variance change-point models with SMN distributions. Due to the complexity of the joint posterior distribution, we propose an efficient Gibbs-type sampling algorithm with Metropolis-Hastings steps for posterior Bayesian inference. Thereafter, following the idea of [1], we consider the problems of single and multiple change-point detection. The performance of the proposed procedures is illustrated and analyzed through simulation studies. A real application to closing-price data from the U.S. stock market is analyzed for illustrative purposes.
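A minimal Gibbs-sampler sketch for the simplest case, a single variance change point in a zero-mean Gaussian series with inverse-gamma priors (the SMN case in the thesis adds latent mixing variables and Metropolis-Hastings steps; priors and the zero-mean assumption here are simplifications).

    import numpy as np

    def gibbs_variance_cp(x, iters=2000, a0=2.0, b0=1.0, seed=0):
        rng = np.random.default_rng(seed)
        x = np.asarray(x, float)
        n, k = len(x), len(x) // 2
        draws = []
        for _ in range(iters):
            # Inverse-gamma full conditionals for the two segment variances.
            s1 = 1.0 / rng.gamma(a0 + k / 2, 1.0 / (b0 + 0.5 * np.sum(x[:k] ** 2)))
            s2 = 1.0 / rng.gamma(a0 + (n - k) / 2, 1.0 / (b0 + 0.5 * np.sum(x[k:] ** 2)))
            # Discrete full conditional of the change-point location.
            ks = np.arange(1, n)
            logp = np.array([
                -0.5 * (j * np.log(s1) + np.sum(x[:j] ** 2) / s1
                        + (n - j) * np.log(s2) + np.sum(x[j:] ** 2) / s2)
                for j in ks])
            p = np.exp(logp - logp.max())
            k = rng.choice(ks, p=p / p.sum())
            draws.append(k)
        return np.array(draws)    # posterior samples of the change point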
Abstract:
The number and grade of injured neuroanatomic structures and the type of injury determine the degree of impairment after a brain injury event and the recovery options of the patient. However, the body of knowledge and clinical intervention guides are focused mainly on functional disorder and still do not take into account the location of injuries; the prognostic value of location information is not known in detail either. This paper proposes a feature-based detection algorithm, named the Neuroanatomic-Based Detection Algorithm (NBDA), based on SURF (Speeded Up Robust Features), to label anatomical brain structures in cortical and sub-cortical areas. The main goal is to register injured neuroanatomic structures in order to generate a database containing the patient's structural impairment profile. This kind of information makes it possible to relate structural damage to functional disorders and to prognostic evolution during neurorehabilitation procedures.
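NBDA builds on SURF keypoints; a minimal extraction sketch with OpenCV follows (the paper's labeling of anatomical structures goes well beyond this step). Note that SURF is patent-encumbered and lives in the opencv-contrib-python build; the file path is hypothetical.

    import cv2

    def surf_keypoints(slice_path, hessian_threshold=400):
        """Extract SURF keypoints and descriptors from one image slice."""
        img = cv2.imread(slice_path, cv2.IMREAD_GRAYSCALE)
        surf = cv2.xfeatures2d.SURF_create(hessianThreshold=hessian_threshold)
        keypoints, descriptors = surf.detectAndCompute(img, None)
        return keypoints, descriptors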
Abstract:
This paper discusses the target localization problem in wireless visual sensor networks. Additive noise and measurement errors affect the accuracy of target localization when the visual nodes are equipped with low-resolution cameras. With the goal of improving the accuracy of target localization without prior knowledge of the target, each node extracts multiple feature points from images to represent the target at the sensor node level. A statistical method is presented to match the most correlated feature point pair, merging the position information of different sensor nodes at the base station. In addition, for the case in which more than one target exists in the field of interest, a scheme for locating multiple targets is provided. Simulation results show that the proposed method performs well in improving the accuracy of locating a single target or multiple targets, and that it achieves a good trade-off between camera node usage and localization accuracy.
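The abstract does not specify the statistical matching method, so the following is only a hedged guess at the flavor of the idea: the most correlated descriptor pair across two nodes is matched, and the merged position is the average of the corresponding per-node estimates.

    import numpy as np

    def fuse_two_nodes(desc_a, pos_a, desc_b, pos_b, eps=1e-12):
        """desc_*: (n, d) feature descriptors; pos_*: (n, 2) position estimates."""
        a = (desc_a - desc_a.mean(1, keepdims=True)) / (desc_a.std(1, keepdims=True) + eps)
        b = (desc_b - desc_b.mean(1, keepdims=True)) / (desc_b.std(1, keepdims=True) + eps)
        corr = a @ b.T / desc_a.shape[1]          # pairwise Pearson correlations
        i, j = np.unravel_index(np.argmax(corr), corr.shape)
        return (pos_a[i] + pos_b[j]) / 2          # merged position estimate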
Abstract:
One of the most significant research topics in computer vision is object detection. Most reported object detection results localise the detected object within a bounding box but do not explicitly label the edge contours of the object. Since object contours provide a fundamental diagnostic of object shape, some researchers have initiated work on linear contour feature representations for object detection and localisation. However, linear contour feature-based localisation is highly dependent on the performance of linear contour detection within natural images, which can be perturbed significantly by a cluttered background. In addition, the conventional approach to achieving rotation-invariant features is to rotate the feature receptive field to align with the local dominant orientation before computing the feature representation. Grid resampling after rotation adds extra computational cost and increases the total time required to compute the feature descriptor. Although this is not an expensive process on current computers, it is still desirable for each step of the implementation to be fast, especially when the number of local features grows and the application must run in real time on resource-limited "smart devices" such as mobile phones.

Motivated by these issues, this thesis proposes a 2D object localisation system that matches features of edge contour points, an alternative method that exploits shape information for object localisation. This is inspired by the fact that edge contour points comprise the basic components of shape contours; moreover, edge point detection is usually simpler than linear edge contour detection, so the proposed system avoids the need for linear contour detection and reduces pathological disruption from the image background. Since natural images usually contain many more edge contour points than interest points (i.e. corner points), new methods are also proposed to generate rotation-invariant local feature descriptors without pre-rotating the feature receptive field, improving the computational efficiency of the whole system. In detail, the 2D object localisation system matches edge contour point features in a constrained search area based on an initial pose estimate produced by a prior object detection process. The local feature descriptor obtains rotation invariance by exploiting the rotational symmetry of the hexagonal structure, and a set of local feature descriptors is proposed based on a hierarchical hexagonal grouping structure.

Ultimately, the 2D object localisation system achieves very promising performance based on matching the proposed edge contour point features, with a mean correct labelling rate of 0.8654 and a mean false labelling rate of 0.0314 on data from the Amsterdam Library of Object Images (ALOI). Furthermore, the proposed descriptors are compared with state-of-the-art descriptors and achieve competitive performance in terms of pose estimation, with around half-pixel pose error.
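A sketch of the underlying principle only, using a standard trick rather than the thesis's actual descriptor: intensities sampled on a 6-fold symmetric ring pattern are made invariant to cyclic shifts (i.e., 60-degree rotations) by taking per-ring DFT magnitudes, so no pre-rotation of the receptive field is needed. Radii and sample counts are illustrative assumptions.

    import numpy as np

    def hex_ring_descriptor(img, cx, cy, radii=(2, 4, 8), samples=6):
        """Rotation-invariant descriptor around (cx, cy); the point must lie
        far enough from the image border for all radii."""
        desc = []
        for r in radii:
            angles = 2 * np.pi * np.arange(samples) / samples
            ring = np.array([img[int(round(cy + r * np.sin(a))),
                                 int(round(cx + r * np.cos(a)))] for a in angles])
            # DFT magnitudes are invariant to cyclic shifts of the ring,
            # i.e., to rotations by multiples of 60 degrees.
            desc.extend(np.abs(np.fft.rfft(ring)))
        return np.array(desc)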
Abstract:
In accessibility studies, among others, a particular kind of structure that can be derived from a network (possibly multi-modal and parameterizable) is very useful: the so-called "service areas", which consist of polygons, each corresponding to a zone lying within a certain cost interval relative to a given feature (point, multipoint, etc.). The aim of this study is to obtain, from the service areas of a whole universe of features, the service areas of subsets of those features. The techniques involved require relatively complex polygon manipulations and can be generalized to sets of sets, and so on. It should be noted that the network itself is not always available; sometimes only the aforementioned structures are at hand, possibly, in the case of service areas, as raster images that must be converted to vector format.
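As a loose illustration only: if the per-feature, per-cost-band polygons are available, a first approximation to the service area of a subset of features for a given band is the union of its members' polygons (the study notes the real manipulations are more involved than this). Names below are hypothetical; the sketch uses shapely.

    from shapely.ops import unary_union

    def subset_service_area(per_feature_polys, subset_ids, cost_band):
        """per_feature_polys: {feature_id: {cost_band: shapely Polygon}}.
        Returns the union of the chosen features' polygons for one band."""
        return unary_union([per_feature_polys[f][cost_band] for f in subset_ids])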
Abstract:
In this article we present the application of a cortex-inspired algorithm called HMAX as the central component of a process for the automatic detection of characteristic facial points in images.
Abstract:
"Cachaça" is the Brazilian name for the spirit obtained from sugarcane. According to Brazilian regulations, it may be sold raw or with added sugar and may contain up to 5 mg/L of copper. Copper in cachaça was determined by titration with EDTA, using a homemade copper membrane electrode for end-point detection. A pooled standard deviation of 0.057 mg/L was found, and there was no significant difference between the results obtained by the potentiometric method and by flame atomic absorption spectrometry with standard addition. Among the 21 cachaça samples from 16 different brands analyzed, three exceeded the legal copper limit. Given its accuracy, precision, and speed, the potentiometric method may be advantageously employed in routine analysis, especially when low cost is a major concern.
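The abstract does not detail how the end point is located from the electrode readings; a common, hedged sketch places it at the steepest part of the potentiometric curve, i.e., at the maximum of |dE/dV|.

    import numpy as np

    def end_point(volumes, potentials):
        """volumes, potentials: 1D arrays sampled along the titration curve."""
        dE_dV = np.gradient(potentials, volumes)      # first derivative of E vs V
        return volumes[np.argmax(np.abs(dE_dV))]      # steepest point ~ end point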
Abstract:
A new titrimetric method for the determination of phosphite in fertilizer samples, based on the reaction of H3PO3 with standard iodine solution in neutral media, is proposed. Diluted samples containing ca. 0.4% m/v P2O5 are heated and titrated with 0.05 mol L-1 standard iodine solution until the solution becomes faint yellow. Back titration is also feasible: a slight excess of titrant is added, followed by starch indicator, and the titration is completed taking as the end point the change in color from blue to colorless. The influence of the chemical composition and pH of the buffers, temperature, and foreign species on waiting time and end-point detection was investigated. For the Na2HPO4/NaH2PO4 buffer (pH 6.8) at 70 °C, the titration time was 10 min, corresponding to about 127 mg of iodine, 200 mg of KI, 174 mg of Na2HPO4, and 176 mg of NaH2PO4 consumed per determination. Accuracy was checked for phosphite determination in seven fertilizer samples, and results obtained by the proposed procedure were in agreement with those obtained by spectrophotometry at the 95% confidence level. The R.S.D. (n=10) for direct and back titration was 0.4% and 1.3%, respectively.
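A worked-calculation sketch for the direct titration: H3PO3 reacts 1:1 with iodine (H3PO3 + I2 + H2O -> H3PO4 + 2 HI), and 1 mol of P2O5 corresponds to 2 mol of H3PO3, hence to 2 mol of I2. The 10 mL sample aliquot below is an assumption chosen so the numbers are consistent with the abstract's ca. 0.4% m/v level and ~127 mg of iodine per determination.

    M_P2O5 = 141.94                        # g/mol

    def p2o5_percent_mv(v_i2_mL, c_i2=0.05, sample_mL=10.0):
        mol_i2 = c_i2 * v_i2_mL / 1000.0   # mol of iodine consumed
        g_p2o5 = (mol_i2 / 2) * M_P2O5     # 2 mol I2 per mol P2O5
        return 100.0 * g_p2o5 / sample_mL  # % m/v

    # 10 mL of 0.05 mol/L I2 is 0.5 mmol (~127 mg I2), giving ~0.36% m/v P2O5:
    print(p2o5_percent_mv(10.0))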
Abstract:
Image registration is a fundamental step that greatly affects later processes in image mosaicking, multi-spectral image fusion, digital surface modelling, etc., where the final solution requires blending pixel information from more than one image. It is highly desirable to find a way to identify registration regions among input stereo image pairs with high accuracy, particularly in remote sensing applications in which ground control points (GCPs) are not always available, such as when selecting a landing zone on another planet. In this paper, a framework for localization in image registration is developed. It strengthens local registration accuracy in two respects: lower reprojection error and better feature point distribution. The affine scale-invariant feature transform (ASIFT) is used to acquire feature points and correspondences on the input images. A homography matrix is then estimated as the transformation model by an improved random sample consensus (IM-RANSAC) algorithm. In order to identify a registration region with a better spatial distribution of feature points, the Euclidean distance between the feature points is applied (named the S criterion). Finally, the parameters of the homography matrix are optimized by the Levenberg-Marquardt (LM) algorithm with selected feature points from the chosen registration region. In the experimental section, Chang'E-2 satellite remote sensing imagery is used to evaluate the performance of the proposed method. The results demonstrate that the proposed method can automatically locate a specific region with high registration accuracy between input images, achieving lower root mean square error (RMSE) and a better distribution of feature points.
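A hedged sketch of the pipeline's core steps using stock OpenCV, with SIFT and standard RANSAC standing in for the paper's ASIFT and IM-RANSAC; the spread statistic at the end is only an S-criterion-like proxy for feature point distribution, not the paper's definition.

    import cv2
    import numpy as np

    def estimate_homography(img1, img2, ratio=0.75):
        """Match features between two grayscale images and fit a homography."""
        sift = cv2.SIFT_create()
        k1, d1 = sift.detectAndCompute(img1, None)
        k2, d2 = sift.detectAndCompute(img2, None)
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        good = [m for m, n in matcher.knnMatch(d1, d2, k=2)
                if m.distance < ratio * n.distance]      # Lowe's ratio test
        src = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
        # Mean pairwise distance of inlier points: larger values suggest a
        # better spatial distribution of the matched feature points.
        pts = src[inliers.ravel() == 1].reshape(-1, 2)
        spread = np.mean([np.linalg.norm(p - q) for p in pts for q in pts])
        return H, inliers, spread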