954 results for Scale Invariant Feature Transform (SIFT)
Abstract:
We study resonant pair production of heavy particles in fully hadronic final states by means of jet substructure techniques. We propose a new resonance tagging strategy that smoothly interpolates between the highly boosted and fully resolved regimes, leading to uniform signal efficiencies and background rejection rates across a broad range of masses. Our method makes it possible to efficiently replace independent experimental searches, based on different final-state topologies, with a single common analysis. As a case study, we apply our technique to pair production of Higgs bosons decaying into $b\bar{b}$ pairs in generic New Physics scenarios. We adopt as benchmark models radion and massive KK graviton production in warped extra dimensions. We find that despite the overwhelming QCD background, the 4b final state has enough sensitivity to provide a complementary handle in searches for enhanced Higgs pair production at the LHC.
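The abstract does not spell out the tagging algorithm itself, but the resolved-regime step it builds on is standard in hh → 4b analyses: group the four b-jets into two dijet (Higgs candidate) pairs. Below is a minimal Python sketch of that pairing step, assuming plain (E, px, py, pz) four-momenta; choosing the pairing with the most compatible dijet masses is one common criterion, not necessarily the paper's:

```python
from itertools import combinations

def pair_higgs_candidates(jets):
    """Group four b-jet four-momenta (E, px, py, pz) into two Higgs
    candidates, choosing the split whose dijet masses are closest."""
    def mass(j1, j2):
        e = j1[0] + j2[0]
        p = [j1[i] + j2[i] for i in (1, 2, 3)]
        m2 = e * e - sum(c * c for c in p)
        return max(m2, 0.0) ** 0.5

    best = None
    # 6 index pairs = the 3 distinct pairings, each seen twice (harmless).
    for (a, b) in combinations(range(4), 2):
        (c, d) = [i for i in range(4) if i not in (a, b)]
        m1, m2 = mass(jets[a], jets[b]), mass(jets[c], jets[d])
        if best is None or abs(m1 - m2) < best[0]:
            best = (abs(m1 - m2), (m1, m2))
    return best[1]

# Placeholder momenta in GeV, standing in for a b-tagged jet collection.
jets = [(130.0, 40.0, 10.0, 100.0), (90.0, -30.0, 20.0, 60.0),
        (110.0, 25.0, -35.0, 80.0), (95.0, -20.0, -15.0, 70.0)]
print(pair_higgs_candidates(jets))
```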
Abstract:
Acknowledgements James J. Waggitt was funded by a NERC CASE studentship supported by OpenHydro Ltd and Marine Scotland Science (NE/J500148/1). Vessel-based transects were funded by a NERC grant (NE/J004340/1) and a Scottish Natural Heritage (SNH) grant. FVCOM modelling was funded by a NERC grant (NE/J004316/1). Marine Scotland Science provided time on the FRV Alba-na-Mara as part of the Marine Collaboration Research Forum (MarCRF). The bathymetry data used in hydrodynamic models (HI 1122 Sanday Sound to Westray Firth) was collected by the Maritime & Coastguard Agency (MCA) as part of the UK Civil Hydrography Programme. We wish to thank Christina Bristow, Matthew Finn and Jennifer Norris at the European Marine Energy Centre (EMEC); Marianna Chimienti, Ciaran Cronin, Tim Sykes and Stuart Thomas for performing vessel-based transects; Marine Scotland Science staff Eric Armstrong, Ian Davies, Mike Robertson, Robert Watret and Michael Stewart for their assistance; Shaun Fraser, Pauline Goulet, Alex Robbins, Helen Wade and Jared Wilson for invaluable discussions; Thomas Cornulier, Alex Douglas, James Grecian and Samantha Patrick for their help with statistical analysis; and Gavin Siriwardena, Leigh Torres, Mark Whittingham and Russell Wynn for their constructive comments on earlier versions of this manuscript.
Abstract:
This work addresses the problem of matching two images. Image matching can be of the template matching type or the keypoint matching type. These algorithms locate a region of a first image within a second image. Our group has developed two rotation-, scale- and translation-invariant template matching algorithms, named Ciratefi (Circular, radial and template matching filter) and Forapro (Fourier coefficients of radial and circular projection). The strengths of these algorithms are invariance to brightness/contrast changes and robustness to repetitive patterns. In the first part of this thesis, we make Ciratefi invariant to affine transformations, obtaining Aciratefi (Affine-ciratefi). We built an image database to compare this algorithm with Asift (Affine-scale invariant feature transform) and Aforapro (Affine-forapro). Asift is currently considered the best affine-invariant image matching algorithm, and Aforapro was proposed in our master's thesis. Our results suggest that Aciratefi outperforms Asift in the combined presence of repetitive patterns, brightness/contrast changes and viewpoint changes. In the second part of this thesis, we construct an algorithm for filtering keypoint matches, based on a concept we call geometric coherence. We apply this filtering to the well-known Sift (scale invariant feature transform) algorithm, on which Asift is based. We evaluate our proposal on Mikolajczyk's image database. The error rates obtained are significantly lower than those of the original Sift.
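The abstract does not detail the geometric coherence criterion, so the sketch below pairs OpenCV's stock SIFT matching (Lowe's ratio test) with one plausible coherence filter: under a similarity transform, the ratio of pairwise keypoint distances between the two images is constant, so matches whose ratios disagree with the consensus are discarded. File names are placeholders, and the thesis's actual criterion may differ:

```python
import cv2
import numpy as np

def coherence_filter(pts1, pts2, tol=0.2, min_support=0.5):
    # Under a similarity transform |p_i - p_j| / |q_i - q_j| is constant,
    # so spurious matches receive little support from the rest.
    d1 = np.linalg.norm(pts1[:, None] - pts1[None, :], axis=-1)
    d2 = np.linalg.norm(pts2[:, None] - pts2[None, :], axis=-1)
    ratio = d1 / (d2 + 1e-9)
    med = np.median(ratio[np.triu_indices(len(pts1), 1)])
    support = (np.abs(ratio - med) < tol * med).mean(axis=1)
    return support > min_support

img1 = cv2.imread("query.png", cv2.IMREAD_GRAYSCALE)   # placeholder files
img2 = cv2.imread("target.png", cv2.IMREAD_GRAYSCALE)
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Lowe's ratio test over 2-nearest-neighbour matches.
raw = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
good = [m for m, n in raw if m.distance < 0.75 * n.distance]

pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])
keep = coherence_filter(pts1, pts2)
filtered = [m for m, k in zip(good, keep) if k]
print(f"{len(filtered)} of {len(good)} matches survive the coherence filter")
```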
Abstract:
We introduce a new second-order method of texture analysis called Adaptive Multi-Scale Grey Level Co-occurrence Matrix (AMSGLCM), based on the well-known Grey Level Co-occurrence Matrix (GLCM) method. The method deviates significantly from GLCM in that features are extracted, not via a fixed 2D weighting function of co-occurrence matrix elements, but by a variable summation of matrix elements in 3D localized neighborhoods. We subsequently present a new methodology for extracting optimized, highly discriminant features from these localized areas using adaptive Gaussian weighting functions. Genetic Algorithm (GA) optimization is used to produce a set of features whose classification worth is evaluated by discriminatory power and feature correlation considerations. We critically appraised the performance of our method and GLCM in pairwise classification of images from visually similar texture classes, captured from Markov Random Field (MRF) synthesized, natural, and biological origins. In these cross-validated classification trials, our method demonstrated significant benefits over GLCM, including increased feature discriminatory power, automatic feature adaptability, and significantly improved classification performance.
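For orientation, here is a minimal sketch of the baseline the method improves on: classical GLCM features with fixed property weightings, computed with scikit-image. AMSGLCM replaces these fixed weightings with adaptive, GA-optimized Gaussian weightings over localized 3D matrix neighborhoods; that part is not reproduced here. The texture patch is synthetic:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

# Hypothetical 8-bit texture patch standing in for a real image.
patch = (np.random.rand(64, 64) * 255).astype(np.uint8)

# Co-occurrence matrices over several (distance, angle) offsets.
glcm = graycomatrix(patch,
                    distances=[1, 2, 4],
                    angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                    levels=256, symmetric=True, normed=True)

# Classical GLCM features use fixed 2D weighting functions of the matrix
# elements; each property is averaged over all offsets here.
features = {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
print(features)
```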
Abstract:
Image registration is a fundamental step which greatly affects later processes in image mosaicking, multi-spectral image fusion, digital surface modelling, etc., where the final solution requires blending pixel information from more than one image. It is highly desirable to find a way to identify registration regions among input stereo image pairs with high accuracy, particularly in remote sensing applications in which ground control points (GCPs) are not always available, such as when selecting a landing zone on another planet. In this paper, a framework for localization in image registration is developed. It strengthens local registration accuracy in two respects: lower reprojection error and better feature point distribution. The affine scale-invariant feature transform (ASIFT) was used for acquiring feature points and correspondences on the input images. Then, a homography matrix was estimated as the transformation model by an improved random sample consensus (IM-RANSAC) algorithm. In order to identify a registration region with a better spatial distribution of feature points, the Euclidean distance between the feature points is applied (named the S criterion). Finally, the parameters of the homography matrix were optimized by the Levenberg–Marquardt (LM) algorithm with selective feature points from the chosen registration region. In the experiment section, Chang’E-2 satellite remote sensing imagery was used for evaluating the performance of the proposed method. The experiment results demonstrate that the proposed method can automatically locate a specific region with high registration accuracy between input images, achieving lower root mean square error (RMSE) and a better distribution of feature points.
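A minimal sketch of the estimate-then-score portion of such a pipeline, using OpenCV's standard RANSAC homography as a stand-in for the paper's IM-RANSAC variant, and reporting inlier reprojection RMSE. The correspondence arrays are assumed to come from an ASIFT-style matcher:

```python
import cv2
import numpy as np

def register_and_score(pts_src, pts_dst):
    """pts_src, pts_dst: (N, 2) float32 arrays of matched feature points.
    Estimate a homography with RANSAC, then report inlier reprojection
    RMSE; OpenCV refines the inlier fit with Levenberg-Marquardt."""
    H, mask = cv2.findHomography(pts_src, pts_dst, cv2.RANSAC, 3.0)
    inl = mask.ravel().astype(bool)
    proj = cv2.perspectiveTransform(pts_src[inl].reshape(-1, 1, 2), H)
    err = np.linalg.norm(proj.reshape(-1, 2) - pts_dst[inl], axis=1)
    rmse = float(np.sqrt((err ** 2).mean()))
    return H, rmse
```

Lower RMSE and a more even spread of inliers over the image (the role of the S criterion) are exactly the two quantities the paper's framework optimizes when choosing a registration region.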
Abstract:
Perceiving the world visually is a basic act for humans, but for computers it is still an unsolved problem. The variability present in natural environments is an obstacle for effective computer vision. The goal of invariant object recognition is to recognise objects in a digital image despite variations in, for example, pose, lighting or occlusion. In this study, invariant object recognition is considered from the viewpoint of feature extraction. The differences between local and global features are studied, with emphasis on Hough transform and Gabor filtering based feature extraction. The methods are examined with respect to four capabilities: generality, invariance, stability, and efficiency. Invariant features are presented using both the Hough transform and Gabor filtering. A modified Hough transform technique is also presented, where the distortion tolerance is increased by incorporating local information. In addition, methods for decreasing the computational cost of the Hough transform employing parallel processing and local information are introduced.
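A minimal sketch of Gabor filtering based feature extraction of the kind discussed here, assuming OpenCV; pooling response energy over a bank of orientations is one simple way to obtain a rotation-tolerant descriptor, and the thesis's actual constructions differ in detail:

```python
import cv2
import numpy as np

def gabor_features(img, n_orientations=8, sigma=4.0, lambd=10.0):
    """Filter with a bank of Gabor kernels at several orientations and
    pool the response energies over the image."""
    responses = []
    for k in range(n_orientations):
        theta = k * np.pi / n_orientations
        kern = cv2.getGaborKernel((31, 31), sigma, theta, lambd,
                                  gamma=0.5, psi=0)
        resp = cv2.filter2D(img.astype(np.float32), cv2.CV_32F, kern)
        responses.append(np.mean(resp ** 2))
    # Sorting over orientations discards the absolute orientation,
    # yielding a rotation-insensitive feature vector; mean energy is
    # already insensitive to small translations.
    return np.sort(responses)
```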
Abstract:
Knowledge of the reflectivity of the sediment-covered seabed is of significant importance to marine seismic data acquisition and interpretation, as it governs the generation of reverberations in the water layer. In this context, pertinent but largely unresolved questions concern the importance of the typically very prominent vertical seismic velocity gradients as well as the potential presence and magnitude of anisotropy in soft surficial seabed sediments. To address these issues, we explore the seismic properties of granulometric end-member-type clastic sedimentary seabed models consisting of sand, silt, and clay, as well as scale-invariant stochastic layer sequences of these components characterized by realistic vertical gradients of the P- and S-wave velocities. Using effective media theory, we then assess the nature and magnitude of seismic anisotropy associated with these models. Our results indicate that anisotropy is rather benign for P-waves, and that the S-wave velocities in the axial directions differ only slightly. Because of the very high P- to S-wave velocity ratios in the vicinity of the seabed, our models nevertheless suggest that S-wave triplications may occur at very small incidence angles. To numerically evaluate the P-wave reflection coefficient of our seabed models, we apply a frequency-slowness technique to the corresponding synthetic seismic wavefields. Comparison with analytical plane-wave reflection coefficients calculated for corresponding isotropic elastic half-space models shows that the differences tend to be most pronounced in the vicinity of the elastic equivalent of the critical angle as well as in the post-critical range. We also find that the presence of intrinsic anisotropy in the clay component of our layered models tends to dramatically reduce the overall magnitude of the P-wave reflection coefficient as well as its variation with incidence angle.
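For reference, the critical angle invoked above is the textbook Snell's-law quantity, not anything specific to this paper: for a plane wave incident from the water column (P velocity $v_w$) onto a faster half-space (P velocity $v_p$), refraction obeys Snell's law and total reflection sets in beyond $\theta_c$:

```latex
\frac{\sin\theta_1}{v_w} = \frac{\sin\theta_2}{v_p},
\qquad
\theta_c = \arcsin\!\left(\frac{v_w}{v_p}\right), \quad v_p > v_w .
```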
Abstract:
Simulated-annealing-based conditional simulations provide a flexible means of quantitatively integrating diverse types of subsurface data. Although such techniques are being increasingly used in hydrocarbon reservoir characterization studies, their potential in environmental, engineering and hydrological investigations is still largely unexploited. Here, we introduce a novel simulated annealing (SA) algorithm geared towards the integration of high-resolution geophysical and hydrological data which, compared to more conventional approaches, provides significant advancements in the way that large-scale structural information in the geophysical data is accounted for. Model perturbations in the annealing procedure are made by drawing from a probability distribution for the target parameter conditioned to the geophysical data. This is the only place where geophysical information is utilized in our algorithm, which is in marked contrast to other approaches where model perturbations are made through the swapping of values in the simulation grid and agreement with soft data is enforced through a correlation coefficient constraint. Another major feature of our algorithm is the way in which available geostatistical information is utilized. Instead of constraining realizations to match a parametric target covariance model over a wide range of spatial lags, we constrain the realizations only at smaller lags where the available geophysical data cannot provide enough information. Thus we allow the larger-scale subsurface features resolved by the geophysical data to exert much more control on the output realizations. Further, since the only component of the SA objective function required in our approach is a covariance constraint at small lags, our method has improved convergence and computational efficiency over more traditional methods. Here, we present the results of applying our algorithm to the integration of porosity log and tomographic crosshole georadar data to generate stochastic realizations of the local-scale porosity structure. Our procedure is first tested on a synthetic data set, and then applied to data collected at the Boise Hydrogeophysical Research Site.
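For readers unfamiliar with the machinery, here is a generic SA loop of the kind the paper builds on. This is a sketch only: the paper's algorithm draws perturbations from a geophysics-conditioned distribution and constrains the covariance only at small lags, and both of those are abstract callables here:

```python
import numpy as np

def simulated_annealing(field, objective, perturb,
                        n_iter=100_000, t0=1.0, cooling=0.999):
    """Generic SA loop for conditional simulation.
    `objective` measures covariance mismatch (e.g. at small lags only);
    `perturb` proposes a new candidate field."""
    temp = t0
    obj = objective(field)
    for _ in range(n_iter):
        candidate = perturb(field)
        new_obj = objective(candidate)
        # Metropolis acceptance: always accept improvements, sometimes
        # accept degradations, with tolerance shrinking as temp cools.
        if new_obj < obj or np.random.rand() < np.exp((obj - new_obj) / temp):
            field, obj = candidate, new_obj
        temp *= cooling
    return field
```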
Abstract:
We examine the scale invariants in the preparation of highly concentrated w/o emulsions at different scales and under varying conditions. The emulsions are characterized using rheological parameters, owing to their highly elastic behavior. We first construct and validate empirical models to describe the rheological properties. These models yield a reasonable prediction of the experimental data. We then build an empirical scale-up model to predict the preparation and composition conditions that have to be kept constant at each scale to prepare the same emulsion. For this purpose, three preparation scales with geometric similarity are used. The parameter N·D^α, as a function of the stirring rate N, the scale (D, impeller diameter) and the exponent α (calculated empirically from the regression of all the experiments at the three scales), is defined as the scale invariant that needs to be optimized once the dispersed phase of the emulsion, the surfactant concentration, and the dispersed phase addition time are set. As far as we know, no other study has obtained a scale-invariant factor N·D^α for the preparation of highly concentrated emulsions at three different scales, covering different addition times and surfactant concentrations. The power-law exponent obtained seems to indicate that the scale-up criterion for this system is the power input per unit volume (P/V).
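A minimal sketch of how such an invariant is used, with placeholder numbers: keeping N·D^α constant across geometrically similar scales fixes the stirring rate at the new scale. Note that with the turbulent-regime correlations P ∝ N³D⁵ and V ∝ D³, constant P/V corresponds to constant N·D^(2/3), i.e. α = 2/3, which is consistent with the abstract's closing remark:

```python
def scale_up_stirring_rate(n1, d1, d2, alpha):
    """Keep the invariant N * D**alpha constant across scales:
    N1 * D1**alpha = N2 * D2**alpha  =>  N2 = N1 * (D1 / D2)**alpha."""
    return n1 * (d1 / d2) ** alpha

# Placeholder values: 700 rpm with a 5 cm impeller, scaled up to 15 cm,
# using alpha = 2/3 (the constant-P/V criterion in the turbulent regime).
print(scale_up_stirring_rate(700.0, 0.05, 0.15, alpha=2.0 / 3.0))
# -> roughly 336 rpm
```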
Abstract:
Large-scale supervised learning of hierarchical networks is currently enjoying tremendous success. Despite this excitement, unsupervised learning remains, according to many researchers, a key element of Artificial Intelligence, where agents must learn from a potentially limited amount of data. This thesis follows that line of thought and addresses various research topics related to the density estimation problem through Boltzmann machines (BMs), the probabilistic graphical models at the heart of deep learning. Our contributions touch on sampling, partition function estimation, optimization, and the learning of invariant representations. The thesis opens with a new adaptive sampling algorithm that automatically adjusts the temperature of the simulated Markov chains so as to maintain a high convergence speed throughout training. When used for stochastic maximum likelihood (SML) learning, our algorithm yields increased robustness to the choice of learning rate as well as faster convergence. Our results are presented for BMs, but the method is general and applicable to training any probabilistic model that relies on Markov chain sampling. While the maximum likelihood gradient can be approximated by sampling, evaluating the log-likelihood requires an estimate of the partition function. In contrast to traditional approaches, which treat a given model as a black box, we propose to exploit the dynamics of learning by estimating the successive changes in log-partition incurred at each parameter update. The estimation problem is reformulated as an inference problem similar to Kalman filtering, but on a two-dimensional graph whose dimensions correspond to the time axis and the temperature parameter. On the optimization side, we also present an algorithm for applying the natural gradient efficiently to Boltzmann machines with thousands of units. Until now, its adoption has been limited by its high computational cost and memory requirements. Our algorithm, Metric-Free Natural Gradient (MFNG), avoids explicitly computing the Fisher information matrix (and its inverse) by exploiting a linear solver combined with an efficient matrix-vector product. The algorithm is promising: in terms of the number of function evaluations, MFNG converges faster than SML. Its implementation unfortunately remains inefficient in wall-clock time. This work also explores the mechanisms underlying the learning of invariant representations. To this end, we use the family of "spike & slab" restricted Boltzmann machines (ssRBM), which we modify so that it can model sparse binary distributions. The binary latent variables of the ssRBM can be made invariant to a vector subspace by associating with each of them a vector of continuous latent variables (called "slabs"). This translates into increased invariance of the representation and better classification rates when few labeled data are available.
We close this thesis with an ambitious topic: learning representations that can separate the factors of variation present in the input signal. We propose a solution based on a bilinear ssRBM (with two groups of latent factors) and formulate the problem as one of "pooling" in complementary vector subspaces.
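Since the thesis's sampling contributions build on Markov chain sampling in Boltzmann machines, here is a minimal block-Gibbs step for a binary RBM in numpy. This is the generic primitive only, not the adaptive-temperature scheme described above; dimensions and parameters are placeholders:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_step(v, W, b_vis, b_hid, rng):
    """One block-Gibbs sweep of a binary RBM: sample h | v, then v | h."""
    h = (rng.random((v.shape[0], W.shape[1])) < sigmoid(v @ W + b_hid)) * 1.0
    v_new = (rng.random(v.shape) < sigmoid(h @ W.T + b_vis)) * 1.0
    return v_new

rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.01, size=(784, 500))    # placeholder dimensions
b_vis, b_hid = np.zeros(784), np.zeros(500)
v = (rng.random((64, 784)) < 0.5) * 1.0       # random initial chain states
for _ in range(10):
    v = gibbs_step(v, W, b_vis, b_hid, rng)
# Sampling at inverse temperature beta < 1 simply scales W and the biases
# by beta before sampling; the thesis adapts these temperatures on the fly.
```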
Abstract:
The triggering of convective orographic rainbands by small-scale topographic features is investigated through observations of a banded precipitation event over the Oregon Coastal Range and simulations using a cloud-resolving numerical model. A quasi-idealized simulation of the observed event reproduces the bands in the radar observations, indicating the model’s ability to capture the physics of the band-formation process. Additional idealized simulations reinforce that the bands are triggered by lee waves past small-scale topographic obstacles just upstream of the nominal leading edge of the orographic cloud. Whether a topographic obstacle in this region is able to trigger a strong rainband depends on the phase of its lee wave at cloud entry. Convective growth only occurs downstream of obstacles that give rise to lee-wave-induced displacements that create positive vertical velocity anomalies w_c and nearly zero buoyancy anomalies b_c as air parcels undergo saturation. This relationship is quantified through a simple analytic condition involving w_c, b_c, and the static stability N_m^2 of the cloud mass. Once convection is triggered, horizontal buoyancy gradients in the cross-flow direction generate circulations that align the bands parallel to the flow direction.
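The analytic condition itself is not given in the abstract; for context, the textbook parcel argument behind quantities like these is that a saturated parcel displaced vertically by $\eta$ in a cloud of moist static stability $N_m^2$ obeys (a standard linear relation, not the paper's criterion)

```latex
\frac{d^{2}\eta}{dt^{2}} + N_m^{2}\,\eta = 0, \qquad b = -N_m^{2}\,\eta ,
```

so a parcel reaching saturation with positive $w_c$ and near-zero $b_c$ carries upward momentum with little restoring force wherever $N_m^2$ is small, which is exactly the favorable configuration the abstract describes.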
Abstract:
Over the last two decades, researchers have been working on developing systems that can assist drivers in the best way possible and make driving safe. Computer vision has played a crucial part in the design of these systems. With the introduction of vision techniques, various autonomous and robust real-time traffic automation systems have been designed, such as traffic monitoring, traffic-related parameter estimation and intelligent vehicles. Among these, automatic detection and recognition of road signs has become an interesting research topic. Such a system can inform drivers about signs they do not recognize before passing them. The aim of this research project is to present an intelligent road sign recognition system based on a state-of-the-art technique, the Support Vector Machine. The project is an extension of the work done at the ITS research platform at Dalarna University [25]. The focus of this research work is on the recognition of road signs. When classifying an image, its location, size and orientation in the image plane are irrelevant features, and one way to remove this ambiguity is to extract features that are invariant under the above-mentioned transformations. These invariant features are then used in a Support Vector Machine for classification. The Support Vector Machine is a supervised learning machine that solves problems in higher dimensions with the help of kernel functions and is best known for classification problems.
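A minimal sketch of this pipeline under stated assumptions: Hu's seven moment invariants are one classical choice of features invariant to translation, scale and rotation (the project's actual feature set may differ), fed to a kernel SVM. The training arrays are hypothetical:

```python
import cv2
import numpy as np
from sklearn.svm import SVC

def hu_features(img):
    """Hu's seven moment invariants are unchanged under translation,
    scaling and rotation of a shape; log-scaling tames their range."""
    m = cv2.moments(img.astype(np.uint8))
    hu = cv2.HuMoments(m).ravel()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

# Hypothetical data: binarized sign images and integer class labels.
X = np.array([hu_features(img) for img in train_images])
clf = SVC(kernel="rbf", C=10.0, gamma="scale")  # RBF kernel lifts the
clf.fit(X, train_labels)                        # problem to higher dimension
pred = clf.predict([hu_features(test_image)])
```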
Abstract:
Objective Psychiatric comorbidity is the rule in obsessive-compulsive disorder (OCD); however, very few studies have evaluated the clinical characteristics of patients with no co-occurring disorders (non-comorbid or pure OCD). The aim of this study was to estimate the prevalence of pure cases in a large multicenter sample of OCD patients and to compare the sociodemographic and clinical characteristics of individuals with and without any lifetime axis I comorbidity. Method A cross-sectional study with 955 adult patients of the Brazilian Research Consortium on Obsessive-Compulsive Spectrum Disorders (C-TOC). Assessment instruments included the Yale-Brown Obsessive-Compulsive Scale (Y-BOCS), the Dimensional Yale-Brown Obsessive-Compulsive Scale (DY-BOCS), the USP Sensory Phenomena Scale and the Brown Assessment of Beliefs Scale. Comorbidities were evaluated using the Structured Clinical Interview for DSM-IV Axis I Disorders. Bivariate analyses were followed by logistic regression. Results Only 74 patients (7.7%) presented pure OCD. Compared with those presenting at least one lifetime comorbidity (881, 92.3%), non-comorbid patients were more likely to be female and to be working, reported fewer traumatic experiences, and presented lower scores on the Y-BOCS obsession subscale and on total DY-BOCS scores. All symptom dimensions except contamination-cleaning and hoarding were less severe in non-comorbid patients. They also presented less severe depression and anxiety, lower suicidality and fewer previous treatments. In the logistic regression, the following variables predicted pure OCD: sex, severity of depressive and anxious symptoms, previous suicidal thoughts and psychotherapy. Conclusions Pure OCD patients were the minority in this large sample and were characterized by female sex, less severe depressive and anxious symptoms, fewer suicidal thoughts and less use of psychotherapy as a treatment modality. The implications of these findings for clinical practice are discussed.
Abstract:
The human visual system is able to effortlessly integrate local features to form our rich perception of patterns, despite the fact that visual information is discretely sampled by the retina and cortex. Using a novel perturbation technique, we show that the mechanisms by which features are integrated into coherent percepts are scale-invariant and nonlinear (phase and contrast polarity independent). They appear to operate by assigning position labels or “place tags” to each feature. Specifically, in the first series of experiments, we show that the positional tolerance of these place tags in foveal and peripheral vision is about half the separation of the features, suggesting that the neural mechanisms that bind features into forms are quite robust to topographical jitter. In the second series of experiments, we asked how many stimulus samples are required for pattern identification by human and ideal observers. In human foveal vision, only about half the features are needed for reliable pattern interpolation. In this regard, human vision is quite efficient (ratio of ideal to real ≈ 0.75). Peripheral vision, on the other hand, is rather inefficient, requiring more features, suggesting that the stimulus may be relatively underrepresented at the stage of feature integration.