36 results for Photometric stereo
at Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
Changes in the angle of illumination incident upon a 3D surface texture can significantly alter its appearance, implying variations in the image texture. These texture variations produce displacements of class members in the feature space, increasing the failure rates of texture classifiers. To avoid this problem, a model-based texture recognition system which classifies textures seen from different distances and under different illumination directions is presented in this paper. The system works on the basis of a surface model obtained by means of 4-source colour photometric stereo, which is used to generate 2D image textures under different illumination directions. The recognition system combines co-occurrence matrices for feature extraction with a Nearest Neighbour classifier. Moreover, the recognition stage also allows one to estimate the approximate direction of the illumination used to capture the test image.
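As a rough illustration of the recognition pipeline described in this abstract (co-occurrence features plus a Nearest Neighbour classifier), the following Python sketch uses scikit-image and scikit-learn; the specific feature set, distances and angles are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: grey-level co-occurrence features plus a 1-Nearest-Neighbour
# classifier. Library choices and parameters are assumptions for illustration.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.neighbors import KNeighborsClassifier

def glcm_features(patch, distances=(1, 2), angles=(0, np.pi / 4, np.pi / 2)):
    """Co-occurrence statistics for one grey-scale texture patch (uint8)."""
    glcm = graycomatrix(patch, distances=distances, angles=angles,
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

def train_classifier(patches, labels):
    """labels may encode both texture class and illumination direction."""
    X = np.array([glcm_features(p) for p in patches])
    clf = KNeighborsClassifier(n_neighbors=1)
    clf.fit(X, labels)
    return clf
```

In this scheme, training patches rendered under several illumination directions can be labelled with both texture class and illumination direction, so the nearest neighbour simultaneously suggests the class and the approximate lighting of a test image.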
Abstract:
In this paper we present a novel structure from motion (SfM) approach able to infer 3D deformable models from uncalibrated stereo images. Using a stereo setup dramatically improves the 3D model estimation when the observed 3D shape is mostly deforming without undergoing strong rigid motion. Our approach first calibrates the stereo system automatically and then computes a single metric rigid structure for each frame. Afterwards, these 3D shapes are aligned to a reference view using a RANSAC method in order to compute the mean shape of the object and to select the subset of points on the object which have remained rigid throughout the sequence without deforming. The selected rigid points are then used to compute frame-wise shape registration and to extract the motion parameters robustly from frame to frame. Finally, all this information is used in a global optimization stage with bundle adjustment, which allows the frame-wise initial solution to be refined and the non-rigid 3D model to be recovered. We show results on synthetic and real data that demonstrate the performance of the proposed method even when there is no rigid motion in the original sequence.
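One step of the pipeline above, the RANSAC alignment of each per-frame 3D shape to a reference shape, might look roughly like the following numpy sketch; the minimal sample size, threshold and iteration count are placeholders, and this is not the authors' code.

```python
# Hedged sketch: rigid alignment (Kabsch) of a per-frame 3D point set to a
# reference inside a RANSAC loop, so points that stayed rigid can be selected.
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rotation R and translation t mapping src -> dst."""
    cs, cd = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs

def ransac_align(src, dst, iters=500, thresh=0.01, rng=np.random.default_rng(0)):
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(src), 3, replace=False)      # minimal sample
        R, t = rigid_fit(src[idx], dst[idx])
        err = np.linalg.norm(src @ R.T + t - dst, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    if best_inliers.sum() >= 3:
        R, t = rigid_fit(src[best_inliers], dst[best_inliers])  # refit on inliers
    return R, t, best_inliers
```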
Abstract:
Catadioptric sensors are combinations of mirrors and lenses designed to obtain a wide field of view. In this paper we propose a new sensor that has omnidirectional viewing ability and also provides depth information about its nearby surroundings. The sensor is based on a conventional camera coupled with a laser emitter and two hyperbolic mirrors. The mathematical formulation and precise specifications of the intrinsic and extrinsic parameters of the sensor are discussed. Our approach overcomes limitations of existing omnidirectional sensors and can eventually lead to reduced production costs.
Abstract:
In this work we propose a new automatic methodology for computing accurate digital elevation models (DEMs) in urban environments from the low-baseline stereo pairs that shall be available in the future from a new kind of earth observation satellite. This setting makes both views of the scene look alike, thus avoiding occlusions and illumination changes, which are the main disadvantages of the commonly accepted large-baseline configuration. There still remain two crucial technological challenges: (i) precisely estimating DEMs with strong discontinuities and (ii) providing a statistically proven result, automatically. The first one is solved here by a piecewise affine representation that is well adapted to man-made landscapes, whereas the application of computational Gestalt theory introduces reliability and automation. In fact this theory allows us to reduce the number of parameters to be adjusted and to control the number of false detections. This leads to the selection of a suitable segmentation into affine regions (whenever possible) by a novel and completely automatic perceptual grouping method. It also allows us to discriminate, e.g., vegetation-dominated regions, where such an affine model does not apply and a more classical correlation technique should be preferred. In addition, we propose here an extension of the classical "quantized" Gestalt theory to continuous measurements, thus combining its reliability with the precision of the variational robust estimation and fine interpolation methods that are necessary in the low-baseline case. Such an extension is very general and will be useful for many other applications as well.
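The piecewise affine representation mentioned above can be illustrated, for one segmented region, by a simple least-squares affine fit of disparity against image coordinates; the a-contrario (Gestalt) validation and the perceptual grouping are not sketched here.

```python
# Minimal sketch: within one region, disparity (hence elevation) is modelled as
# d(x, y) = a*x + b*y + c and fitted by least squares. Illustration only.
import numpy as np

def fit_affine_disparity(xs, ys, disparities):
    """Least-squares affine fit over the pixels of one region."""
    A = np.column_stack([xs, ys, np.ones_like(xs, dtype=float)])
    coeffs, residuals, _, _ = np.linalg.lstsq(A, disparities, rcond=None)
    return coeffs, residuals

def predict_disparity(coeffs, xs, ys):
    a, b, c = coeffs
    return a * xs + b * ys + c
```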
Abstract:
We present a set of photometric data concerning two distant clusters of galaxies: Cl 1613+3104 (z=0.415) and Cl 1600+4109 (z=0.540). The photometric survey extends to a field of about 4' x 3'. It was performed in 3 filters: Johnson B, and Thuan-Gunn g and r. The sample includes 679 objects in the field of Cl 1613+3104 and 334 objects in Cl 1600+4109.
Abstract:
Spectroscopic and photometric observations in a 6 arcmin x 6 arcmin field centered on the rich cluster of galaxies Abell 2390 are presented. The photometry concerns 700 objects and the spectroscopy 72 objects. The redshift survey shows that the mean redshift of the cluster is 0.232. An original method for automatic determination of the spectral type of galaxies is presented.
Abstract:
Results are presented on Stromgren-Crawford uvby-beta photometry carried out at Calar Alto (Spain) for 45 stars in the Cepheus OB3 region covering an apparent area of 6 deg x 6 deg. The relationship of these stars with the association is examined. Three of these stars (BD +62 deg 2114, BD +62 deg 2152, and BD +62 deg 2156) are identified as members of this association, while two more (BD +64 deg 1714 and BD +64 deg 1717) are classified as possible members. It is noted that intrinsic colors and absolute magnitudes of member stars are consistent with the hypothesis of Blaauw (1964) and Garmany (1973) of the existence of two subgroups with different evolutionary phases.
Abstract:
We present new photometric and spectroscopic observations of objects in the field of the cluster of galaxies Abell 2218. The photometric survey, centered on the cluster core, extends to a field of about 4 x 4 arcmin. It was performed in 5 bands (B, g, r, i and z filters). This sample, which includes 729 objects, is about three times larger than the survey made by Butcher and collaborators (Butcher et al., 1983; Butcher and Oemler, 1984) in the same central region of the field. Only 228 objects appear in both catalogues since our survey covers a smaller region. The spectral range covered by our filters is wider and the photometry is much deeper, reaching magnitude 27 in r. The spectroscopic survey concerns 66 objects, on a field comparable to that of Butcher and collaborators. From our observations we calculate the mean redshift of the cluster, 0.1756, and its velocity dispersion, 1370 km/s. The spectral types of many galaxies in the sample are determined by comparing their spectra with synthetic ones from Rocca-Volmerange and Guiderdoni (1988).
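The template comparison used for the spectral typing can be illustrated by a minimal chi-square match against synthetic spectra; the free scale factor and noise model below are assumptions, and the actual Rocca-Volmerange and Guiderdoni templates are not included.

```python
# Illustrative sketch: pick the synthetic template that best matches an
# observed spectrum in the chi-square sense, with a free flux scale factor.
import numpy as np

def best_template(observed_flux, sigma, templates):
    """Return the index of the best-fitting template.

    All spectra are assumed to be resampled onto a common wavelength grid.
    """
    chi2 = []
    for tmpl in templates:
        # optimal scale factor minimizing chi-square for this template
        scale = np.sum(observed_flux * tmpl / sigma**2) / np.sum(tmpl**2 / sigma**2)
        chi2.append(np.sum(((observed_flux - scale * tmpl) / sigma) ** 2))
    return int(np.argmin(chi2))
```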
Abstract:
We present new optical and infrared photometric observations and high-resolution Hα spectra of the periodic radio star LSI+61°303. The optical photometric data set covers the time interval 1985-1993 and amounts to about a hundred nights. A period of ∼26 days is found in the V band. The infrared data also present evidence for a similar periodicity, but with a higher amplitude of variation (about 0.2 mag). The spectroscopic observations include 16 intermediate- and high-dispersion spectra of LSI+61°303 collected between January 1989 and February 1993. The Hα emission line profile and its variations are analyzed. Several emission line parameters -- among them the Hα EW and the width of the Hα red hump -- change strongly at or close to radio maximum, and may exhibit periodic variability. We also observe a significant change in the peak separation. The Hα profile of LSI+61°303 does not seem peculiar for a Be star. However, several of the observed variations of the Hα profile can probably be associated with the presence of the compact secondary star.
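A period of roughly 26 days in unevenly sampled photometry such as this is commonly searched for with a Lomb-Scargle periodogram; the astropy-based sketch below is an illustrative choice, not necessarily the method used in the paper.

```python
# Hedged sketch: Lomb-Scargle period search on unevenly sampled V-band data.
import numpy as np
from astropy.timeseries import LombScargle

def find_period(times_jd, v_mag, min_period=2.0, max_period=100.0):
    """Return the period (days) of the strongest periodogram peak."""
    frequency, power = LombScargle(times_jd, v_mag).autopower(
        minimum_frequency=1.0 / max_period, maximum_frequency=1.0 / min_period)
    return 1.0 / frequency[np.argmax(power)]
```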
Abstract:
This project demonstrates the possibilities of stereo vision for on-screen visualization of both simple objects and large scenes, as well as its application in games and in other fields such as cinema, geology and even medicine. The development was carried out using a graphics card with 3D support, an Nvidia 7600GT, and a screen with a high refresh rate, a 19-inch ACER running at 100 Hz. The results concerning visualization were drawn from the opinions of a group of 20 people of various professions, none of them related to computer graphics.
Abstract:
Detecting changes between images of the same scene taken at different times is of great interest for monitoring and understanding the environment. It is widely used in on-land applications but is subject to several constraints: change detection algorithms require highly accurate geometric and photometric registration, a requirement that has precluded their use in underwater imagery in the past. In this paper, the change detection techniques currently available for on-land applications are analyzed and a method to automatically detect changes in sequences of underwater images is proposed. Target application scenarios are habitat restoration sites, or area monitoring after sudden impacts from hurricanes or ship groundings. The method is based on the creation of a 3D terrain model from one image sequence over an area of interest. This model allows textured views to be synthesized that correspond to the same viewpoints as a second image sequence. The generated views are photometrically matched and corrected against the corresponding frames from the second sequence. Standard change detection techniques are then applied to find areas of difference. Additionally, the paper shows that it is possible to detect false positives, resulting from non-rigid objects, by applying the same change detection method to the first sequence exclusively. The developed method was able to correctly find the changes between two challenging sequences of images from a coral reef taken one year apart and acquired with two different cameras.
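The photometric correction and differencing steps described above could be sketched as follows, using histogram matching and a fixed threshold as illustrative stand-ins for the paper's actual matching and change detection techniques.

```python
# Minimal sketch: photometrically match a synthesized view to the observed
# frame, then threshold the absolute difference to obtain a change mask.
import numpy as np
from skimage.exposure import match_histograms

def change_mask(synthesized_view, observed_frame, threshold=0.15):
    """Both images are float arrays in [0, 1], registered to the same viewpoint."""
    corrected = match_histograms(synthesized_view, observed_frame)
    diff = np.abs(corrected - observed_frame)
    return diff > threshold
```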
Abstract:
Omnidirectional cameras offer a much wider field of view than perspective cameras and alleviate the problems due to occlusions. However, both types of cameras suffer from a lack of depth perception. A practical method for obtaining depth in computer vision is to project a known structured light pattern onto the scene, avoiding the problems and costs involved in stereo vision. This paper focuses on the idea of combining omnidirectional vision and structured light with the aim of providing 3D information about the scene. The resulting sensor is formed by a single catadioptric camera and an omnidirectional light projector. It is also discussed how this sensor can be used in robot navigation applications.
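The depth recovery principle stated above, triangulating a known projector ray against the camera viewing ray of the same pattern point, can be sketched with plain ray intersection; the catadioptric mirror model and the pattern decoding are omitted, so this is an illustration rather than the sensor's actual geometry.

```python
# Illustrative sketch: midpoint triangulation of two calibrated 3D rays
# (camera viewing ray and projector ray) observing the same pattern point.
import numpy as np

def triangulate(cam_center, cam_dir, proj_center, proj_dir):
    """Midpoint of the shortest segment between two (generally skew) 3D rays."""
    d1 = cam_dir / np.linalg.norm(cam_dir)
    d2 = proj_dir / np.linalg.norm(proj_dir)
    b = proj_center - cam_center
    a11, a12, a22 = d1 @ d1, d1 @ d2, d2 @ d2
    denom = a11 * a22 - a12 ** 2                 # ~0 when the rays are parallel
    t1 = (a22 * (d1 @ b) - a12 * (d2 @ b)) / denom
    t2 = (a12 * (d1 @ b) - a11 * (d2 @ b)) / denom
    return 0.5 * ((cam_center + t1 * d1) + (proj_center + t2 * d2))
```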
Abstract:
The absolute necessity of obtaining 3D information about structured and unknown environments in autonomous navigation considerably reduces the set of sensors that can be used. Knowing, at every instant, the position of the mobile robot with respect to the scene is indispensable. Furthermore, this information must be obtained in the least computing time. Stereo vision is an attractive and widely used method, but it is rather limited for building fast 3D surface maps, due to the correspondence problem. The spatial and temporal correspondence among images can be alleviated using a method based on structured light: this relationship can be found directly by codifying the projected light, so that each imaged region of the projected pattern carries the information needed to solve the correspondence problem. We present the most significant coded structured light techniques used in recent years.
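As an example of the temporally coded patterns surveyed in this kind of work, the sketch below generates and decodes binary-reflected Gray-code stripe patterns; it illustrates the general idea rather than any specific technique from the survey.

```python
# Hedged sketch: Gray-code stripe patterns. Each projector column is identified
# by the black/white sequence it receives, which solves the correspondence.
import numpy as np

def gray_code_patterns(width, n_bits=10):
    """Return n_bits binary stripe patterns, each of length `width`."""
    columns = np.arange(width)
    gray = columns ^ (columns >> 1)              # binary-reflected Gray code
    bits = [(gray >> (n_bits - 1 - b)) & 1 for b in range(n_bits)]
    return np.array(bits)                        # shape (n_bits, width)

def decode_column(bit_sequence):
    """Recover the projector column index from the observed bit sequence."""
    gray = 0
    for bit in bit_sequence:                     # MSB first
        gray = (gray << 1) | int(bit)
    binary, shift = gray, 1                      # Gray -> binary: XOR of shifts
    while (gray >> shift) > 0:
        binary ^= gray >> shift
        shift += 1
    return binary
```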
Abstract:
Observations of the extraordinarily bright optical afterglow (OA) of GRB 991208 started 2.1 d after the event. The flux decay constant of the OA in the R band is -2.30 +/- 0.07 up to 5 d, which is very likely due to the jet effect, and after that it is followed by a much steeper decay with constant -3.2 +/- 0.2, the fastest ever seen in a GRB OA. A negative detection in several all-sky films taken simultaneously with the event implies a previous additional break prior to 2 d after the occurrence of the GRB (as expected from the jet effect). The existence of a second break might indicate a steepening in the electron spectrum or the superposition of two events. Once the afterglow emission vanished, the contribution of a bright underlying SN is found, but the light curve is not sufficiently well sampled to rule out a dust echo explanation. Our determination of z = 0.706 indicates that GRB 991208 is at 3.7 Gpc, implying an isotropic energy release of 1.15 x 10^53 erg, a requirement that may be relaxed by a factor > 100 if the emission is beamed. Precise astrometry indicates that the GRB coincides within 0.2' with the host galaxy, thus giving support to a massive-star origin. The absolute magnitude is M_B = -18.2, well below the knee of the galaxy luminosity function, and we derive a star-formation rate of 11.5 +/- 7.1 Msun/yr. The quasi-simultaneous broad-band photometric spectral energy distribution of the afterglow is determined 3.5 days after the burst (Dec 12.0), implying a cooling frequency below the optical band, i.e. supporting a jet model with p = -2.30 as the index of the power-law electron distribution.
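The two decay slopes quoted above suggest a broken power-law light curve; a minimal fitting sketch with scipy is given below, where the functional form, initial break-time guess and use of curve_fit are illustrative assumptions only.

```python
# Illustrative sketch: fit a broken power-law decay F(t) = F_b * (t/t_b)**alpha,
# where alpha switches from roughly -2.3 to roughly -3.2 at the break time t_b.
import numpy as np
from scipy.optimize import curve_fit

def broken_power_law(t, f_b, t_b, alpha1, alpha2):
    """Piecewise power-law flux decay with a break at t_b (days)."""
    return np.where(t < t_b,
                    f_b * (t / t_b) ** alpha1,
                    f_b * (t / t_b) ** alpha2)

def fit_decay(t_days, flux):
    p0 = (np.median(flux), 5.0, -2.3, -3.2)      # initial guesses only
    popt, pcov = curve_fit(broken_power_law, t_days, flux, p0=p0)
    return popt, np.sqrt(np.diag(pcov))
```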
Abstract:
Johnson CCD photometry was performed in the two subgroups of the association Cepheus OB3, for selected fields each containing at least one star with previous UBV photoelectric photometry. Photometry for about 1000 stars down to visual magnitude 21 is provided, although completeness tests show that the sample is complete only down to V = 19 mag. Individual errors were assigned to the magnitude and colours of each star. Colour-colour and colour-magnitude diagrams are shown. Astrometric positions of the stars are also given. The reduction procedure is described in full detail.
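A minimal sketch of the kind of CCD aperture photometry underlying such a survey is given below, using photutils as an illustrative tool (not necessarily the authors' reduction software); the zero point is a placeholder and sky subtraction is omitted for brevity.

```python
# Hedged sketch: sum counts in circular apertures at known star positions and
# convert to instrumental magnitudes. Illustration only.
import numpy as np
from photutils.aperture import CircularAperture, aperture_photometry

def instrumental_magnitudes(image, star_xy, radius=5.0, zero_point=25.0):
    """Aperture photometry at (x, y) positions; returns magnitudes."""
    apertures = CircularAperture(star_xy, r=radius)
    table = aperture_photometry(image, apertures)
    flux = np.asarray(table["aperture_sum"])
    return zero_point - 2.5 * np.log10(flux)
```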