742 results for Photometric stereo
Abstract:
We present preliminary results about the detection of high redshift (U)LIRGs in the Bullet cluster field by the PACS and SPIRE instruments within the Herschel Lensing Survey (HLS) Program. We describe in detail a photometric procedure designed to recover robust fluxes and deblend faint Herschel sources near the confusion noise. The method is based on the use of the positions of Spitzer/MIPS 24 μm sources as priors. Our catalogs are able to reliably (5σ) recover galaxies with fluxes above 6 and 10 mJy in the PACS 100 and 160 μm channels, respectively, and 12 to 18 mJy in the SPIRE bands. We also obtain spectral energy distributions covering the optical through the far-infrared/millimeter spectral ranges of all the Herschel detected sources, and analyze them to obtain independent estimations of the photometric redshift based on either stellar population or dust emission models. We exemplify the potential of the combined use of Spitzer position priors plus independent optical and IR photometric redshifts to robustly assign optical/NIR counterparts to the sources detected by Herschel and other (sub-)mm instruments.
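To make the prior-based photometry concrete, the sketch below recovers the fluxes of blended sources by fitting one PSF template per known 24 μm prior position and solving for all fluxes in a single linear least-squares step. The Gaussian PSF, pixel grid and source positions are illustrative assumptions, not the actual HLS extraction pipeline.

```python
import numpy as np

def psf_template(shape, x0, y0, fwhm_pix):
    """Unit-flux Gaussian PSF centred at (x0, y0); a stand-in for the real beam."""
    sigma = fwhm_pix / 2.355
    y, x = np.mgrid[:shape[0], :shape[1]]
    g = np.exp(-((x - x0)**2 + (y - y0)**2) / (2 * sigma**2))
    return g / g.sum()

def deblend_with_priors(image, prior_positions, fwhm_pix):
    """Fit one PSF per 24-um prior position; solve for all fluxes simultaneously."""
    A = np.column_stack([psf_template(image.shape, x, y, fwhm_pix).ravel()
                         for x, y in prior_positions])
    fluxes, *_ = np.linalg.lstsq(A, image.ravel(), rcond=None)
    return fluxes

# Toy example: two blended sources separated by less than one beam FWHM.
rng = np.random.default_rng(0)
truth = [(32.0, 30.0, 12.0), (36.0, 34.0, 6.0)]            # x, y, flux (mJy)
img = sum(f * psf_template((64, 64), x, y, 8.0) for x, y, f in truth)
img += rng.normal(0, 0.002, img.shape)                     # confusion-like noise
print(deblend_with_priors(img, [(32.0, 30.0), (36.0, 34.0)], 8.0))
```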
Abstract:
Acquiring 3D shape from images is a classic problem in Computer Vision that has occupied researchers for at least 20 years. Only recently, however, have these ideas matured enough to provide highly accurate results. We present a complete algorithm to reconstruct 3D objects from images using the stereo correspondence cue. The technique can be described as a pipeline of four basic building blocks: camera calibration, image segmentation, photo-consistency estimation from images, and surface extraction from photo-consistency. In this Chapter we will put more emphasis on the latter two: namely, how to extract geometric information from a set of photographs without explicit camera visibility, and how to combine different geometry estimates in an optimal way. © 2010 Springer-Verlag Berlin Heidelberg.
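A minimal sketch of the photo-consistency idea discussed above: given the (already computed) projections of a candidate 3D point into several calibrated images, score the point by the average pairwise normalized cross-correlation of the surrounding patches. The patch size and the NCC measure are illustrative choices, not necessarily those of the chapter.

```python
import numpy as np

def ncc(patch_a, patch_b, eps=1e-8):
    """Normalised cross-correlation of two image patches, in [-1, 1]."""
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

def photo_consistency(images, pixel_coords, half=3):
    """Average pairwise NCC of the patches around the projections of one
    candidate 3D point; higher means more photo-consistent."""
    patches = []
    for img, (u, v) in zip(images, pixel_coords):
        u, v = int(round(u)), int(round(v))
        patches.append(img[v - half:v + half + 1, u - half:u + half + 1].astype(float))
    scores = [ncc(patches[i], patches[j])
              for i in range(len(patches)) for j in range(i + 1, len(patches))]
    return float(np.mean(scores))
```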
Abstract:
Scientists planning to use underwater stereoscopic image technologies are often faced with numerous problems during the methodological implementation: commercial equipment is too expensive; the setup or calibration is too complex; or the image processing (i.e., measuring objects in the stereo images) is too complicated to be performed without a time-consuming phase of training and evaluation. The present paper addresses some of these problems and describes a workflow for stereoscopic measurements for marine biologists. It also provides instructions on how to assemble an underwater stereo-photographic system with two digital consumer cameras and gives step-by-step guidelines for setting up the hardware. The second part details a software procedure to correct stereo-image pairs for lens distortions, which is especially important when using cameras with non-calibrated optical units. The final part presents a guide to the process of measuring the lengths (or distances) of objects in stereoscopic image pairs. To reveal the applicability and the restrictions of the described systems, and to test the effects of different types of camera (a compact camera and an SLR type), experiments were performed to determine the precision and accuracy of two generic stereo-imaging units: a diver-operated system based on two Olympus Mju 1030SW compact cameras and a cable-connected observatory system based on two Canon 1100D SLR cameras. In the simplest setup, without any correction for lens distortion, the low-budget Olympus Mju 1030SW system achieved mean accuracy errors (percentage deviation of a measurement from the object's real size) between 10.2% and -7.6% (overall mean value: -0.6%), depending on the size, orientation and distance of the measured object from the camera. With the single-lens reflex (SLR) system, very similar values between 10.1% and -3.4% (overall mean value: -1.2%) were observed. Correction of the lens distortion significantly improved the mean accuracy errors of both systems. Moreover, system precision (the spread of the accuracy) improved significantly in both systems. Neither the use of a wide-angle converter nor multiple reassembly of the system had a significant negative effect on the results. The study shows that underwater stereophotography, independent of the system used, has a high potential for robust and non-destructive in situ sampling and can be used without prior specialist training.
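A hedged sketch of the measurement step using OpenCV, assuming the intrinsics, distortion coefficients and relative pose (R, t) of the two cameras are already known from a calibration step; it is not the specific software procedure of the paper, but it shows how undistortion and triangulation combine to yield an object length.

```python
import numpy as np
import cv2

def measure_length(pt_l, pt_r, K_l, dist_l, K_r, dist_r, R, t):
    """Triangulate the two end points of an object seen in a stereo pair and
    return the 3D distance between them (same unit as the baseline t)."""
    # Undistort and normalise the pixel coordinates of both end points.
    pl = cv2.undistortPoints(np.float32(pt_l).reshape(-1, 1, 2), K_l, dist_l)
    pr = cv2.undistortPoints(np.float32(pt_r).reshape(-1, 1, 2), K_r, dist_r)
    # With normalised coordinates the projection matrices are [I|0] and [R|t].
    P_l = np.hstack([np.eye(3), np.zeros((3, 1))])
    P_r = np.hstack([R, t.reshape(3, 1)])
    X_h = cv2.triangulatePoints(P_l, P_r, pl.reshape(-1, 2).T, pr.reshape(-1, 2).T)
    X = (X_h[:3] / X_h[3]).T            # two 3D points, shape (2, 3)
    return float(np.linalg.norm(X[0] - X[1]))
```

Here pt_l and pt_r are the two object end points clicked in the left and right images, respectively.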
Abstract:
We present an extensive photometric catalog for 548 CALIFA galaxies observed as of the summer of 2015. CALIFA is currently lacking photometry matching the scale and diversity of its spectroscopy; this work is intended to meet all photometric needs for CALIFA galaxies while also identifying best photometric practices for upcoming integral field spectroscopy surveys such as SAMI and MaNGA. This catalog comprises gri surface brightness profiles derived from Sloan Digital Sky Survey (SDSS) imaging, a variety of non-parametric quantities extracted from these profiles, and parametric models fitted to the i-band profiles (1D) and original galaxy images (2D). To complement our photometric analysis, we contrast the relative performance of our 1D and 2D modelling approaches. The ability of each measurement to characterize the global properties of galaxies is quantitatively assessed, in the context of constructing the tightest scaling relations. Where possible, we compare our photometry with existing photometrically or spectroscopically obtained measurements from the literature. Close agreement is found with Walcher et al. (2014), the current source of basic photometry and classifications of CALIFA galaxies, while comparisons with spectroscopically derived quantities reveal the effect of CALIFA's limited field of view compared to broadband imaging surveys such as the SDSS. The colour-magnitude diagram, star formation main sequence, and Tully-Fisher relation of CALIFA galaxies are studied, to give a small example of the investigations possible with this rich catalog. We conclude with a discussion of points of concern for ongoing integral field spectroscopy surveys and directions for future expansion and exploitation of this work.
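As an illustration of the 1D parametric modelling mentioned above, the sketch below fits a single Sérsic component to an i-band surface-brightness profile with a weighted least-squares fit. The single-component assumption and function names are illustrative; the catalogue's actual fits may involve additional components.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import gammaincinv

def sersic_mu(r, mu_e, r_e, n):
    """Sersic profile in surface-brightness form (mag / arcsec^2)."""
    b_n = gammaincinv(2 * n, 0.5)        # exact b_n, not the usual approximation
    return mu_e + 2.5 * b_n / np.log(10) * ((r / r_e) ** (1 / n) - 1)

def fit_sersic_1d(radius, mu, mu_err):
    """Weighted fit of a single Sersic component to an i-band profile."""
    p0 = [np.interp(np.median(radius), radius, mu), np.median(radius), 2.0]
    popt, pcov = curve_fit(sersic_mu, radius, mu, p0=p0, sigma=mu_err,
                           absolute_sigma=True)
    return popt, np.sqrt(np.diag(pcov))  # (mu_e, r_e, n) and their uncertainties
```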
Abstract:
Application of stereo vision algorithms with different configurations, with the aim of comparing them and evaluating which one to adopt in a subsequent FPGA implementation.
Abstract:
Thesis on cost aggregation methods applied to stereo vision, focused in particular on the box filtering algorithm.
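For context, box-filter cost aggregation in a basic block-matching pipeline can be sketched as follows: per-disparity absolute differences are averaged over a square window (the box filter) and the disparity with the lowest aggregated cost wins. The SAD metric and the window size are illustrative choices.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def box_aggregated_disparity(left, right, max_disp, win=9):
    """SAD cost volume aggregated with a box filter, winner-takes-all disparity."""
    h, w = left.shape
    costs = np.full((max_disp, h, w), np.inf)
    for d in range(max_disp):
        # Pixel-wise absolute difference between the left image and the right
        # image shifted by the candidate disparity d.
        diff = np.abs(left[:, d:].astype(float) - right[:, :w - d].astype(float))
        # Cost aggregation: mean over a win x win window, i.e. a box filter.
        costs[d, :, d:] = uniform_filter(diff, size=win)
    return np.argmin(costs, axis=0)
```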
Abstract:
Extracting information from the surrounding environment is a very important goal of modern computer science, enabling the design of robots, self-driving vehicles, recognition systems and much more. Computer vision is the branch of computer science that deals with this, and it is becoming increasingly widespread. To reach this goal, a stereo vision pipeline is used; its rectification and disparity map generation steps are the subject of this thesis. In particular, since these steps are often delegated to dedicated hardware devices (such as FPGAs), there is a need for algorithms that are portable to this kind of technology, where resources are much scarcer. This thesis shows how approximation techniques can be applied to these algorithms in order to save resources while still guaranteeing excellent results.
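A common example of such a hardware-friendly approximation, shown here only as an illustration and not as the specific technique of the thesis, is the census transform with Hamming-distance matching: it replaces floating-point costs with comparisons, shifts and XORs, which map naturally onto FPGA logic.

```python
import numpy as np

def census_transform(img):
    """3x3 census transform: each pixel becomes an 8-bit code built only from
    brightness comparisons with its neighbours."""
    h, w = img.shape
    code = np.zeros((h, w), dtype=np.uint8)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            code = (code << 1) | (shifted < img).astype(np.uint8)
    return code

def census_disparity(left, right, max_disp):
    """Winner-takes-all disparity from Hamming distances between census codes."""
    cl, cr = census_transform(left), census_transform(right)
    h, w = left.shape
    best = np.full((h, w), 255, dtype=np.uint8)
    disp = np.zeros((h, w), dtype=np.int32)
    for d in range(max_disp):
        xor = cl[:, d:] ^ cr[:, :w - d]
        # popcount of the XOR = Hamming distance between the two 8-bit codes
        ham = np.unpackbits(xor[..., None], axis=-1).sum(axis=-1).astype(np.uint8)
        cost = np.full((h, w), 255, dtype=np.uint8)
        cost[:, d:] = ham
        better = cost < best
        best[better], disp[better] = cost[better], d
    return disp
```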
Abstract:
Purpose: Stereopsis is the perception of depth based on retinal disparity. Global stereopsis depends on the processing of random dot stimuli and local stereopsis depends on contour perception. The aim of this study was to correlate three stereopsis tests: TNO®, StereoTAB®, and Fly Stereo Acuity Test®, and to study the sensitivity and correlation between them, using TNO® as the gold standard. Other variables such as the near point of convergence, vergences, symptoms and optical correction were correlated with the three tests. Materials and Methods: Forty-nine students from Escola Superior de Tecnologia da Saúde de Lisboa (ESTeSL), aged 18-26 years, were included. Results: The mean (standard deviation, SD) stereopsis values in each test were: TNO® = 87.04” ±84.09”; FlyTest® = 38.18” ±34.59”; StereoTAB® = 124.89” ±137.38”. Regarding the coefficient of determination: TNO® and StereoTAB® with R² = 0.6, and TNO® and FlyTest® with R² = 0.2. The Pearson correlation coefficient shows a positive correlation between TNO® and StereoTAB® (r = 0.784 with α = 0.01). The Phi coefficient shows a strong positive association between TNO® and StereoTAB® (Φ = 0.848 with α = 0.01). In the ROC curve analysis, the StereoTAB® has a larger area under the curve than the FlyTest®, with a sensitivity of 92.3% at a specificity of 94.4%, meaning the test is sensitive and has good discriminative power. Conclusion: We conclude that the use of global stereopsis tests is an asset for clinical practice. This type of test is more sensitive, revealing changes in stereopsis when it is actually altered, unlike local stereopsis tests, which often indicate normal stereopsis and thus camouflage a change in stereopsis. We also noted that the StereoTAB® is very sensitive and, despite being a digital application, correlated well with the TNO®.
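The statistics reported above (Pearson r, R², ROC analysis) can be computed on any paired set of thresholds; the sketch below uses made-up stereoacuity values and an assumed 60-arcsec cut-off for "reduced stereopsis" purely to illustrate the computation.

```python
import numpy as np
from scipy import stats
from sklearn.metrics import roc_auc_score, roc_curve

# Illustrative (made-up) stereoacuity thresholds in arcsec; not the study data.
tno       = np.array([30, 60, 60, 120, 240, 480, 60, 30, 120, 60], dtype=float)
stereotab = np.array([40, 80, 60, 160, 400, 480, 80, 40, 200, 60], dtype=float)

r, p = stats.pearsonr(tno, stereotab)          # Pearson correlation and p-value
print(f"r = {r:.3f}, R^2 = {r*r:.3f}, p = {p:.4f}")

# Treat TNO > 60 arcsec as 'reduced stereopsis' (gold standard) and ask how well
# the StereoTAB threshold discriminates it, as in an ROC analysis.
abnormal = (tno > 60).astype(int)
auc = roc_auc_score(abnormal, stereotab)
fpr, tpr, thresholds = roc_curve(abnormal, stereotab)
print(f"AUC = {auc:.3f}")
```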
Abstract:
In this thesis, we propose several advances in the numerical and computational algorithms that are used to determine tomographic estimates of physical parameters in the solar corona. We focus on methods for both global dynamic estimation of the coronal electron density and estimation of local transient phenomena, such as coronal mass ejections, from empirical observations acquired by instruments onboard the STEREO spacecraft. We present a first look at tomographic reconstructions of the solar corona from multiple points-of-view, which motivates the developments in this thesis. In particular, we propose a method for linear equality constrained state estimation that leads toward more physical global dynamic solar tomography estimates. We also present a formulation of the local static estimation problem, i.e., the tomographic estimation of local events and structures like coronal mass ejections, that couples the tomographic imaging problem to a phase field based level set method. This formulation will render feasible the 3D tomography of coronal mass ejections from limited observations. Finally, we develop a scalable algorithm for ray tracing dense meshes, which allows efficient computation of many of the tomographic projection matrices needed for the applications in this thesis.
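The linear equality constrained state estimation mentioned above reduces, in its simplest static form, to a constrained least-squares problem; a minimal dense sketch via the KKT system is given below. The thesis deals with far larger, sparse, dynamic systems, so this is only an illustration.

```python
import numpy as np

def equality_constrained_lsq(A, b, C, d):
    """Solve min ||A x - b||^2 subject to C x = d via the KKT system."""
    n, m = A.shape[1], C.shape[0]
    K = np.block([[A.T @ A, C.T],
                  [C, np.zeros((m, m))]])
    rhs = np.concatenate([A.T @ b, d])
    sol = np.linalg.solve(K, rhs)
    return sol[:n]                    # x; sol[n:] holds the Lagrange multipliers

# Tiny example: fit densities along 3 rays but force the first two voxels equal.
A = np.array([[1.0, 1.0, 0.0], [0.0, 1.0, 1.0], [1.0, 0.0, 1.0]])  # toy projection
b = np.array([2.0, 3.0, 4.0])                                      # toy measurements
C = np.array([[1.0, -1.0, 0.0]])                                   # x0 - x1 = 0
d = np.array([0.0])
print(equality_constrained_lsq(A, b, C, d))
```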
Abstract:
Most approaches to stereo visual odometry reconstruct the motion based on the tracking of point features along a sequence of images. However, in low-textured scenes it is often difficult to encounter a large set of point features, or it may happen that they are not well distributed over the image, so that the behavior of these algorithms deteriorates. This paper proposes a probabilistic approach to stereo visual odometry based on the combination of both point and line segment features that works robustly in a wide variety of scenarios. The camera motion is recovered through non-linear minimization of the projection errors of both point and line segment features. In order to effectively combine both types of features, their associated errors are weighted according to their covariance matrices, computed from the propagation of Gaussian distribution errors in the sensor measurements. The method is, of course, computationally more expensive than using only one type of feature, but it can still run in real time on a standard computer and provides interesting advantages, including a straightforward integration into any probabilistic framework commonly employed in mobile robotics.
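A minimal sketch of the covariance-weighted minimization: each feature's error is whitened by the inverse square root of its covariance, so that point and line-segment errors become comparable before a standard non-linear least-squares solver is applied. The tiny linear "features" in the usage example are purely illustrative; in the actual method the residuals are point and line reprojection errors and the unknown is an SE(3) motion.

```python
import numpy as np
from scipy.optimize import least_squares

def weighted_residuals(pose, observations):
    """Stack feature errors, each whitened by the inverse square root of its
    covariance, so that heterogeneous features contribute on an equal footing."""
    res = []
    for project, obs, cov in observations:   # (prediction fn, measurement, covariance)
        e = project(pose) - obs              # error for this feature
        L = np.linalg.cholesky(np.linalg.inv(cov))
        res.append(L.T @ e)                  # whitened error ~ N(0, I)
    return np.concatenate(res)

def estimate_motion(initial_pose, observations):
    """Non-linear minimisation of the combined, covariance-weighted errors."""
    sol = least_squares(weighted_residuals, initial_pose, args=(observations,),
                        method="lm")
    return sol.x

# Toy usage: a 2-DoF "pose" and two synthetic features with different covariances.
def make_obs(H, z, cov):
    return (lambda pose: H @ pose, z, cov)

obs = [make_obs(np.eye(2), np.array([1.0, 2.0]), np.eye(2) * 0.01),
       make_obs(np.array([[1.0, 1.0]]), np.array([3.5]), np.array([[0.25]]))]
print(estimate_motion(np.zeros(2), obs))
```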
Abstract:
Clouds are important in weather prediction, climate studies and aviation safety. Important parameters include cloud height, type and cover percentage. In this paper, the recent improvements in the development of a low-cost cloud height measurement setup are described. It is based on stereo vision with consumer digital cameras. The positioning of the cameras is calibrated using the position of stars in the night sky. An experimental uncertainty analysis of the calibration parameters is performed. Cloud height measurement results are presented and compared with LIDAR measurements.
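The depth relation behind such a setup is the standard rectified-stereo equation Z = f·B/d; the sketch below applies it with made-up numbers (in the real system, the star-based orientation calibration described above is also required).

```python
def cloud_height(baseline_m, focal_px, disparity_px):
    """Height of a cloud feature above two parallel, zenith-pointing cameras
    separated by a horizontal baseline: Z = f * B / d."""
    return baseline_m * focal_px / disparity_px

# Illustrative numbers: 100 m baseline, 2400 px focal length, 120 px disparity.
print(cloud_height(100.0, 2400.0, 120.0))   # -> 2000.0 m
```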
Abstract:
This work aims to develop a neurogeometric model of stereo vision, based on the cortical architectures involved in the problem of 3D perception and on the neural mechanisms generated by retinal disparities. First, we provide a sub-Riemannian geometry for stereo vision, inspired by the work on the stereo problem by Zucker (2006), and using sub-Riemannian tools introduced by Citti-Sarti (2006) for monocular vision. We present a mathematical interpretation of the neural mechanisms underlying the behavior of binocular cells, which integrate monocular inputs. The natural compatibility between stereo geometry and neurophysiological models shows that these binocular cells are sensitive to position and orientation. Therefore, we model their action in the space R3 x S2 equipped with a sub-Riemannian metric. Integral curves of the sub-Riemannian structure model neural connectivity and can be related to the 3D analog of the psychophysical association fields for the 3D process of regular contour formation. Then, we identify 3D perceptual units in the visual scene: they emerge as a consequence of the random cortico-cortical connections of binocular cells. Considering a suitable stochastic version of the integral curves, we generate a family of kernels. These kernels represent the probability of interaction between binocular cells, and they are implemented as facilitation patterns to define the evolution in time of the neural population activity at a point. This activity is usually modeled through a mean field equation: steady stable solutions lead us to consider the associated eigenvalue problem. We show that three-dimensional perceptual units naturally arise from the discrete version of the eigenvalue problem associated with the integro-differential equation of the population activity.
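As a loose illustration of the final step, the discrete eigenvalue problem can be mimicked by building an affinity matrix between "cells" and reading grouping off its leading eigenvectors; the isotropic Gaussian affinity used here is only a stand-in for the anisotropic connectivity kernels obtained from the stochastic integral curves.

```python
import numpy as np

def perceptual_units(cells, sigma=1.5, k=2):
    """Toy version of the discrete eigenvalue problem: Gaussian affinity between
    'cells' (here generic points, a stand-in for states in R^3 x S^2), leading
    eigenpairs of the affinity as candidate perceptual units."""
    d = np.linalg.norm(cells[:, None, :] - cells[None, :, :], axis=-1)
    A = np.exp(-(d / sigma) ** 2)            # stand-in connectivity kernel
    w, v = np.linalg.eigh(A)                 # symmetric -> real spectrum
    return w[::-1][:k], v[:, ::-1][:, :k]    # leading eigenvalues / eigenvectors

# Two well-separated toy clusters of cells: each leading eigenvector is
# concentrated on one cluster, i.e. one "perceptual unit".
rng = np.random.default_rng(1)
cells = np.vstack([rng.normal(0.0, 0.3, (20, 6)), rng.normal(3.0, 0.3, (20, 6))])
vals, vecs = perceptual_units(cells)
print(np.round(vecs, 2))
```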
Abstract:
In Stereo Vision, a field of Computer Vision, the goal is to reconstruct the depth of a scene starting from pairs of RGB images. Most of the algorithms used for this task assume that all surfaces in the scene are Lambertian. When non-Lambertian surfaces (reflective or transparent) are present, existing stereo algorithms mispredict the depth. To address this problem, during the internship a dataset containing transparent and reflective objects was created, which forms the basis for training the network. The objects in the scenes are associated with 3D annotations used to train the network. In this thesis work, instead, using the RAFT-Stereo algorithm [1], a state-of-the-art network for stereo vision, we analyse how the network's performance (disparity prediction) changes when a module for the semantic segmentation of objects is inserted into it. This additional layer is introduced because finding the correspondence between two points belonging to non-Lambertian surfaces is very hard for an ordinary network. The idea is to use the semantic information to recognise these kinds of surfaces and thus improve their disparity. This neural architecture was chosen because, during the internship devoted to the creation of the Booster dataset [2], it proved to be the best on that dataset. The ultimate goal of this work is to assess whether the recognition of non-Lambertian surfaces by the semantic module improves the disparity prediction. In stereo vision, reflective and transparent elements are extremely hard to analyse, yet they remain an active subject of study given the many application domains, such as autonomous driving and robotics.
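One generic way to inject semantic information into a stereo network, given here only as a hypothetical sketch and not as the actual RAFT-Stereo modification of the thesis, is to concatenate the per-pixel class probabilities (e.g., a non-Lambertian mask) with the image features and re-project to the original channel count, leaving the rest of the architecture untouched.

```python
import torch
import torch.nn as nn

class SemanticFeatureFusion(nn.Module):
    """Illustrative fusion block: concatenate a per-pixel 'non-Lambertian'
    probability map with the image feature map and re-project to the original
    channel count, so the downstream stereo network is unchanged."""
    def __init__(self, feat_channels, seg_classes=2):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(feat_channels + seg_classes, feat_channels, kernel_size=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, features, seg_logits):
        # seg_logits: (B, seg_classes, H, W) from the added segmentation head,
        # already resized to the feature resolution.
        seg_prob = torch.softmax(seg_logits, dim=1)
        return self.fuse(torch.cat([features, seg_prob], dim=1))

# Usage: backbone features plus coarse segmentation logits of matching size.
feats = torch.randn(1, 256, 64, 80)
seg = torch.randn(1, 2, 64, 80)
fused = SemanticFeatureFusion(256)(feats, seg)   # same shape as feats
```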
Abstract:
In order to estimate depth through supervised deep learning-based stereo methods, it is necessary to have access to precise ground truth depth data. While the gathering of precise labels is commonly tackled by deploying depth sensors, this is not always a viable solution. For instance, in many applications in the biomedical domain, the choice of sensors capable of sensing depth at small distances with high precision on difficult surfaces (which present non-Lambertian properties) is very limited. It is therefore necessary to find alternative techniques to gather ground truth data without having to rely on external sensors. In this thesis, two different approaches have been tested to produce supervision data for biomedical images. The first aims to obtain input stereo image pairs and disparities through simulation in a virtual environment, while the second relies on a non-learned disparity estimation algorithm to produce noisy disparities, which are then filtered by means of hand-crafted confidence measures to create noisy labels for a subset of pixels. Of the two, the second approach, which is referred to in the literature as proxy labeling, has shown the best results and has even outperformed the non-learned disparity estimation algorithm used for supervision.
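A minimal sketch of the proxy-labeling idea under simple assumptions: a non-learned method (SGBM here) produces noisy disparities, and a left-right consistency check acts as the hand-crafted confidence measure that keeps only a trusted subset of pixels as labels. The thesis may use a different algorithm and different confidence measures.

```python
import numpy as np
import cv2

def proxy_labels(left_gray, right_gray, max_disp=128, lr_thresh=1.0):
    """Noisy disparities from a non-learned method (SGBM), kept only where a
    left-right consistency check agrees; the surviving sparse pixels can then
    supervise a stereo network as proxy labels."""
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=max_disp,
                                 blockSize=5)
    disp_l = sgbm.compute(left_gray, right_gray).astype(np.float32) / 16.0
    # Disparity of the right image via the usual horizontal-flip trick.
    disp_r = sgbm.compute(cv2.flip(right_gray, 1),
                          cv2.flip(left_gray, 1)).astype(np.float32) / 16.0
    disp_r = cv2.flip(disp_r, 1)

    h, w = disp_l.shape
    xs = np.arange(w)[None, :].repeat(h, axis=0)
    x_r = np.clip((xs - disp_l).astype(int), 0, w - 1)
    # Confidence: the two views must agree on the disparity at matching pixels.
    consistent = np.abs(disp_l - disp_r[np.arange(h)[:, None], x_r]) < lr_thresh
    valid = (disp_l > 0) & consistent
    return np.where(valid, disp_l, np.nan)   # NaN marks unsupervised pixels
```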