32 results for ILLUMINATION

at Consorci de Serveis Universitaris de Catalunya (CSUC), Spain


Relevance:

20.00%

Publisher:

Abstract:

In several computer graphics areas, a refinement criterion is often needed to decide whether to go on or to stop sampling a signal. When the sampled values are homogeneous enough, we assume that they represent the signal fairly well and we do not need further refinement; otherwise, more samples are required, possibly with adaptive subdivision of the domain. For this purpose, a criterion which is very sensitive to variability is necessary. In this paper, we present a family of discrimination measures, the f-divergences, meeting this requirement. These convex functions have been well studied and successfully applied to image processing and several areas of engineering. Two applications to global illumination are shown: oracles for hierarchical radiosity and criteria for adaptive refinement in ray-tracing. We obtain significantly better results than with classic criteria, showing that f-divergences are worth further investigation in computer graphics. A discrimination measure based on the entropy of the samples for refinement in ray-tracing is also introduced. The recursive decomposition of entropy provides us with a natural method to deal with the adaptive subdivision of the sampling region.
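As a rough illustration of the idea (not the exact oracles used in the paper), the following Python sketch evaluates one f-divergence, the Hellinger divergence, between the normalised sample values and a uniform reference; the threshold value is an arbitrary assumption:

```python
import numpy as np

def f_divergence(p, q, f):
    """Discrete f-divergence: sum_i q_i * f(p_i / q_i)."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = q > 0
    return float(np.sum(q[mask] * f(p[mask] / q[mask])))

# One member of the family; f(t) = t*log(t) would give the Kullback-Leibler divergence.
hellinger = lambda t: 0.5 * (np.sqrt(t) - 1.0) ** 2

def needs_refinement(samples, threshold=0.05):
    """Refinement oracle sketch: compare the normalised sample values with a
    uniform (perfectly homogeneous) distribution; a large divergence signals
    high variability and hence the need for more samples or subdivision."""
    s = np.asarray(samples, dtype=float)
    if s.sum() <= 0:
        return False
    p = s / s.sum()                       # normalised samples
    q = np.full_like(p, 1.0 / p.size)     # homogeneous reference
    return f_divergence(p, q, hellinger) > threshold

print(needs_refinement([0.5, 0.5, 0.5, 0.5]))   # homogeneous -> False
print(needs_refinement([0.1, 0.9, 0.05, 1.2]))  # highly variable -> True
```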

Relevance:

20.00%

Publisher:

Abstract:

This work proposes the detection of red peaches in orchard images based on the definition of different linear color models in the RGB vector color space. The classification and segmentation of the pixels of the image are then performed by comparing the color distance from each pixel to the different previously defined linear color models. The methodology proposed has been tested with images obtained in a real orchard under natural light. The peach variety in the orchard was the paraguayo (Prunus persica var. platycarpa) peach with red skin. The segmentation results showed that the area of the red peaches in the images was detected with an average error of 11.6%: 19.7% in the case of bright illumination; 8.2% in the case of low illumination; 8.6% for occlusion up to 33%; 12.2% in the case of occlusion between 34 and 66%; and 23% for occlusion above 66%. Finally, a methodology was proposed to estimate the diameter of the fruits based on an ellipsoidal fitting. A first diameter was obtained by using all the contour pixels and a second diameter was obtained by rejecting some pixels of the contour. This approach enables a rough estimate of the fruit occlusion percentage range by comparing the two diameter estimates.
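A minimal sketch of the pixel classification step, assuming illustrative (not learned) linear color models; each model is a line in RGB space and each pixel is assigned to the closest line:

```python
import numpy as np

# Hypothetical linear color models: each is a line in RGB space defined by an
# anchor point and a direction vector. The models used in the paper are derived
# from orchard image samples; these values are purely illustrative.
MODELS = {
    "red_peach":  (np.array([120.0,  40.0,  50.0]), np.array([0.9, 0.3, 0.3])),
    "leaves":     (np.array([ 60.0, 110.0,  50.0]), np.array([0.4, 0.8, 0.4])),
    "background": (np.array([ 90.0,  90.0,  90.0]), np.array([1.0, 1.0, 1.0])),
}

def distance_to_line(rgb, anchor, direction):
    """Euclidean distance from an RGB point to a line in RGB space."""
    d = direction / np.linalg.norm(direction)
    v = rgb - anchor
    return np.linalg.norm(v - np.dot(v, d) * d)

def classify_pixel(rgb):
    """Assign the pixel to the linear color model with the smallest color distance."""
    rgb = np.asarray(rgb, dtype=float)
    return min(MODELS, key=lambda name: distance_to_line(rgb, *MODELS[name]))

print(classify_pixel([180, 60, 70]))   # reddish pixel -> "red_peach"
print(classify_pixel([70, 140, 60]))   # greenish pixel -> "leaves"
```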

Relevance:

20.00%

Publisher:

Abstract:

Changes in the angle of illumination incident upon a 3D surface texture can significantly alter its appearance, implying variations in the image texture. These texture variations produce displacements of class members in the feature space, increasing the failure rates of texture classifiers. To avoid this problem, a model-based texture recognition system which classifies textures seen from different distances and under different illumination directions is presented in this paper. The system works on the basis of a surface model obtained by means of 4-source colour photometric stereo, used to generate 2D image textures under different illumination directions. The recognition system combines co-occurrence matrices for feature extraction with a Nearest Neighbour classifier. Moreover, the recognition allows one to estimate the approximate direction of the illumination used to capture the test image.
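The following sketch shows the core of such a pipeline under simplifying assumptions: a hand-rolled gray-level co-occurrence matrix with two classic features (contrast and energy) and a 1-nearest-neighbour classifier; the photometric stereo stage and the paper's actual feature set are not reproduced:

```python
import numpy as np

def glcm_features(image, levels=8, offset=(0, 1)):
    """Gray-level co-occurrence matrix for one pixel offset, plus two classic features."""
    img = (image.astype(float) / image.max() * (levels - 1)).astype(int)
    dy, dx = offset
    glcm = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(max(0, -dy), h - max(0, dy)):
        for x in range(max(0, -dx), w - max(0, dx)):
            glcm[img[y, x], img[y + dy, x + dx]] += 1
    glcm /= glcm.sum()
    i, j = np.indices(glcm.shape)
    contrast = np.sum(glcm * (i - j) ** 2)
    energy = np.sum(glcm ** 2)
    return np.array([contrast, energy])

def nearest_neighbour(train_feats, train_labels, query_feat):
    """1-NN classifier on the extracted feature vectors."""
    dists = [np.linalg.norm(query_feat - f) for f in train_feats]
    return train_labels[int(np.argmin(dists))]

rng = np.random.default_rng(0)
smooth = rng.normal(128, 5, (32, 32)).clip(0, 255)     # low-contrast texture
rough  = rng.integers(0, 256, (32, 32)).astype(float)  # high-contrast texture
feats  = [glcm_features(smooth), glcm_features(rough)]
labels = ["smooth", "rough"]
query  = rng.integers(0, 256, (32, 32)).astype(float)
print(nearest_neighbour(feats, labels, glcm_features(query)))  # expected: "rough"
```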

Relevance:

10.00%

Publisher:

Abstract:

The 1st chapter of this work presents the different experiments and collaborations in which I have been involved during my PhD studies in Physics. Following those descriptions, the 2nd chapter is dedicated to how radiation affects silicon sensors, as well as to some experimental measurements carried out at the CERN (Geneva, Switzerland) and IFIC (Valencia, Spain) laboratories. Besides the previous investigation results, this chapter includes the most recent scientific papers that appeared in the latest RD50 (Research & Development #50) Status Report, published in January 2007, as well as some others published this year. The 3rd and 4th chapters are dedicated to the simulation of the electrical behavior of solid-state detectors. Chapter 3 reports the results obtained for the illumination of edgeless detectors irradiated at different fluences, in the framework of the TOSTER Collaboration. The 4th chapter reports on the design, simulation and fabrication of a novel 3D detector developed at CNM for ion detection in the future ITER fusion reactor. This chapter will be extended with irradiation simulations and experimental measurements in my PhD Thesis.

Relevance:

10.00%

Publisher:

Abstract:

This project is part of a larger project aimed at studying a graphics format that allows a scene modelled in Blender to be exported and then imported into an interactive environment based on Visual C++ with OpenGL. In this way, we combine the modelling capabilities of Blender with the interaction and visualization capabilities of the OpenGL library. The format must represent geometry and textures at a minimum and, if possible, other important factors such as illumination, visualization and motion. The part of the project described in this report consists of studying the most suitable graphics format to represent the different realism factors of the scene (geometry, texture, etc.); the OBJ format was chosen for its representation capabilities and easy editing. To test the format, a nativity-scene diorama was designed using the modelling capabilities of Blender. As for the figures, an important aspect for considering the scene a nativity scene, a 3D scanner was used to obtain 3D mesh representations from real nativity figures, which were subsequently textured. A video of the nativity diorama was generated that shows all its details by navigating the viewpoint through the scene. This video was exhibited at the nativity scene exhibition of the Associació Pessebrista de Sabadell at Christmas 2008.
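As an illustration of why OBJ is easy to edit and parse, here is a minimal Python sketch that reads the geometry-related records of an OBJ file (vertices, texture coordinates, triangular faces); the file name is hypothetical, and materials, normals and larger polygons are not handled:

```python
# Minimal sketch of reading the geometry part of a Wavefront OBJ file.
# Polygons with more than three vertices are truncated to their first three.
def load_obj(path):
    vertices, texcoords, faces = [], [], []
    with open(path) as fh:
        for line in fh:
            parts = line.split()
            if not parts:
                continue
            if parts[0] == "v":                       # geometric vertex
                vertices.append(tuple(map(float, parts[1:4])))
            elif parts[0] == "vt":                    # texture coordinate
                texcoords.append(tuple(map(float, parts[1:3])))
            elif parts[0] == "f":                     # face: v/vt[/vn] indices (1-based)
                face = [int(token.split("/")[0]) - 1 for token in parts[1:4]]
                faces.append(tuple(face))
    return vertices, texcoords, faces

# Usage (hypothetical file name):
# verts, uvs, tris = load_obj("diorama.obj")
# print(len(verts), "vertices,", len(tris), "triangles")
```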

Relevance:

10.00%

Publisher:

Abstract:

The process of merging two or more images of the same scene into a single, larger one is known as image mosaicing. Once the construction of a mosaic is finished, the boundaries between images are usually visible, due to inaccuracies in the photometric and geometric registration. Image blending is the stage of the mosaicing procedure in which these artifacts are minimized or suppressed. Several methodologies in the literature deal with these problems, but most are oriented towards the creation of terrestrial panoramas, high-resolution artistic images or other applications in which camera positioning or image acquisition are not critical stages. Working with underwater images presents important challenges, due to the presence of scattering (reflections from suspended particles) and light attenuation, and to the extreme physical conditions at depths of thousands of metres, with limited control of the acquisition systems and the use of costly technology. Images with similar artificial illumination, without a global light source such as the sun, must be joined without showing a perceptible seam. Images acquired at great depth have a quality that depends strongly on depth, and their degradation with this factor is very significant. The main objectives of this work are to present the main problems of underwater imaging, to select the most appropriate strategies and to address the whole acquisition-processing-visualization sequence. The results obtained show that the developed solution, based on an optimal seam selection strategy, gradient-domain fusion in the overlapping regions and adaptive emphasis of images with a low level of detail, yields high-quality results. A strategy has also been proposed, amenable to parallel implementation, that allows processing mosaics kilometres in extent with a resolution of centimetres per pixel.
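A minimal sketch of one ingredient of such a pipeline: an optimal boundary between two registered overlapping images found by dynamic programming on a per-pixel difference cost. The thesis's actual seam-selection strategy, the gradient-domain fusion and the adaptive emphasis steps are not reproduced here:

```python
import numpy as np

def optimal_seam(img_a, img_b):
    """Vertical seam through the overlap that minimises the accumulated
    per-pixel difference between the two images (dynamic programming)."""
    cost = np.abs(img_a.astype(float) - img_b.astype(float))
    h, w = cost.shape
    acc = cost.copy()
    for y in range(1, h):
        for x in range(w):
            lo, hi = max(0, x - 1), min(w, x + 2)
            acc[y, x] += acc[y - 1, lo:hi].min()
    # Backtrack from the cheapest end point.
    seam = np.zeros(h, dtype=int)
    seam[-1] = int(np.argmin(acc[-1]))
    for y in range(h - 2, -1, -1):
        x = seam[y + 1]
        lo, hi = max(0, x - 1), min(w, x + 2)
        seam[y] = lo + int(np.argmin(acc[y, lo:hi]))
    return seam  # seam[y] = column where the transition between images occurs

rng = np.random.default_rng(1)
a = rng.random((40, 30))
b = a + rng.normal(0, 0.05, (40, 30))   # similar overlap with small differences
print(optimal_seam(a, b)[:5])
```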

Relevance:

10.00%

Publisher:

Abstract:

The scope of the project is the lighting management system service for an office building.

Relevance:

10.00%

Publisher:

Abstract:

This paper describes the improvements achieved in our mosaicking system to assist unmanned underwater vehicle navigation. A major advance has been attained in the processing of images of the ocean floor when light absorption effects are evident. Due to the absorption of natural light, underwater vehicles often require artificial light sources attached to them to provide adequate illumination for processing underwater images. Unfortunately, these flashlights tend to illuminate the scene in a nonuniform fashion. In this paper a technique to correct non-uniform lighting is proposed. The acquired frames are compensated through a point-by-point division of the image by an estimation of the illumination field. Then, the gray-levels of the obtained image are remapped to enhance image contrast. Experiments with real images are presented.
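A minimal sketch of the compensation described above, assuming the illumination field can be approximated by a heavy Gaussian smoothing of the frame; the sigma and the percentile-based remapping are illustrative choices, not the paper's exact parameters:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def correct_lighting(image, sigma=30):
    """Compensate non-uniform lighting: divide by a smooth estimate of the
    illumination field, then remap gray-levels to stretch the contrast."""
    img = image.astype(float) + 1e-6
    illumination = gaussian_filter(img, sigma)     # smooth estimate of the light field
    reflectance = img / illumination               # point-by-point division
    lo, hi = np.percentile(reflectance, (1, 99))   # robust gray-level remapping
    stretched = np.clip((reflectance - lo) / (hi - lo), 0, 1)
    return (stretched * 255).astype(np.uint8)

# Usage on a synthetic frame with a bright spot in the middle:
y, x = np.mgrid[0:200, 0:200]
spot = np.exp(-((x - 100) ** 2 + (y - 100) ** 2) / (2 * 60.0 ** 2))
frame = (spot * 200 + 30).astype(np.uint8)
corrected = correct_lighting(frame)
print(corrected.shape, corrected.dtype)
```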

Relevance:

10.00%

Publisher:

Abstract:

A major obstacle to processing images of the ocean floor comes from the absorption and scattering effects of the light in the aquatic environment. Due to the absorption of natural light, underwater vehicles often require artificial light sources attached to them to provide adequate illumination. Unfortunately, these flashlights tend to illuminate the scene in a nonuniform fashion and, as the vehicle moves, induce shadows in the scene. For this reason, the first step towards the application of standard computer vision techniques to underwater imaging requires dealing with these lighting problems. This paper analyses and compares existing methodologies to deal with low-contrast, nonuniform illumination in underwater image sequences. The reviewed techniques include: (i) study of the illumination-reflectance model, (ii) local histogram equalization, (iii) homomorphic filtering, and (iv) subtraction of the illumination field. Several experiments on real data have been conducted to compare the different approaches.
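As an example of one of the reviewed techniques, the sketch below implements a simple homomorphic filter: the frame is taken to the log domain, the slowly varying illumination component is estimated with a Gaussian low-pass, and the detail (reflectance) component is amplified before recombining; sigma and gain are illustrative values:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def homomorphic_filter(image, sigma=25, gain=1.5):
    """Suppress the low-frequency illumination component and boost the
    high-frequency reflectance component, working in the log domain."""
    log_img = np.log1p(image.astype(float))
    low = gaussian_filter(log_img, sigma)          # slowly varying illumination
    high = log_img - low                           # reflectance detail
    filtered = np.expm1(low + gain * high)         # recombine with boosted detail
    span = filtered.max() - filtered.min() + 1e-9
    return ((filtered - filtered.min()) / span * 255).astype(np.uint8)

# corrected = homomorphic_filter(frame)   # frame: 2D gray-level array
```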

Relevance:

10.00%

Publisher:

Abstract:

It is well known that image processing requires a huge amount of computation, mainly at low-level processing where the algorithms deal with a great number of pixel data. One of the solutions to estimate motion involves the detection of correspondences between two images. For normalised correlation criteria, previous experiments have shown that the result is not altered in the presence of nonuniform illumination. Usually, hardware for motion estimation has been limited to simple correlation criteria. The main goal of this paper is to propose a VLSI architecture for motion estimation using a matching criterion more complex than the Sum of Absolute Differences (SAD) criterion. Today's hardware devices provide many facilities for the integration of more and more complex designs, as well as the possibility to easily communicate with general-purpose processors.
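The difference between the two matching criteria can be seen in a few lines of Python (a conceptual sketch, unrelated to the VLSI design itself): SAD changes drastically under a gray-level change, while normalised correlation does not:

```python
import numpy as np

def sad(a, b):
    """Sum of Absolute Differences: cheap, but sensitive to illumination changes."""
    return float(np.abs(a.astype(float) - b.astype(float)).sum())

def ncc(a, b):
    """Normalised correlation: invariant to affine gray-level changes."""
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

# A block and the same block under a global brightening.
rng = np.random.default_rng(2)
block = rng.random((8, 8))
brighter = block * 1.5 + 0.2
other = rng.random((8, 8))
print(sad(block, brighter) < sad(block, other))    # may be False under the lighting change
print(ncc(block, brighter) > ncc(block, other))    # True: NCC still matches correctly
```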

Relevance:

10.00%

Publisher:

Abstract:

This paper proposes a parallel architecture for estimating the motion of an underwater robot. It is well known that image processing requires a huge amount of computation, mainly at low-level processing where the algorithms deal with a great number of data. In a motion estimation algorithm, correspondences between two images have to be solved at the low level. In underwater imaging, normalised correlation can be a solution in the presence of non-uniform illumination. Due to its regular processing scheme, a parallel implementation of the correspondence problem can be an adequate approach to reduce the computation time. Taking into consideration the complexity of the normalised correlation criteria, a new approach based on the parallel organisation of every processor in the architecture is proposed.
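A software-level sketch of why the correspondence problem parallelises well: each block can be matched independently, so the work items can simply be distributed across processors (here with Python's multiprocessing; the hardware architecture of the paper is not reproduced):

```python
import numpy as np
from multiprocessing import Pool

def match_block(args):
    """Find the best displacement for one block by exhaustive normalised correlation."""
    block, search_area = args
    bh, bw = block.shape
    best, best_pos = -2.0, (0, 0)
    for y in range(search_area.shape[0] - bh + 1):
        for x in range(search_area.shape[1] - bw + 1):
            cand = search_area[y:y + bh, x:x + bw]
            a = block - block.mean()
            b = cand - cand.mean()
            denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
            score = (a * b).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    frame = rng.random((64, 64))
    blocks = [frame[y:y + 8, y:y + 8] for y in range(0, 32, 8)]   # sample blocks
    tasks = [(b, frame) for b in blocks]                          # independent work items
    with Pool(4) as pool:
        print(pool.map(match_block, tasks))                       # one best position per block
```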

Relevance:

10.00%

Publisher:

Abstract:

In computer graphics, global illumination algorithms take into account not only the light that comes directly from the sources, but also the light interreflections. This kind of algorithm produces very realistic images, but at a high computational cost, especially when dealing with complex environments. Parallel computation has been successfully applied to such algorithms in order to make it possible to compute highly realistic images in a reasonable time. We introduce here a speculation-based parallel solution for a global illumination algorithm in the context of radiosity, in which we have taken advantage of the hierarchical nature of such an algorithm.
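For reference, the classical (non-hierarchical, sequential) radiosity system B = E + ρ·F·B solved by iterative gathering looks as follows; the form factors and the toy scene are illustrative, and the speculation-based parallel scheme of the paper is not reproduced:

```python
import numpy as np

def solve_radiosity(emission, reflectance, form_factors, iterations=50):
    """Iteratively solve B = E + rho * F @ B (Jacobi-style gathering).
    form_factors[i, j] is the form factor from patch i to patch j, i.e. the
    fraction of energy leaving patch i that arrives directly at patch j;
    computing these form factors is the expensive part of real systems."""
    radiosity = emission.copy()
    for _ in range(iterations):
        radiosity = emission + reflectance * (form_factors @ radiosity)
    return radiosity

# Toy scene: 3 patches, one of them a light source; values are illustrative.
E = np.array([1.0, 0.0, 0.0])             # only patch 0 emits
rho = np.array([0.0, 0.5, 0.8])           # diffuse reflectances
F = np.array([[0.0, 0.2, 0.2],
              [0.3, 0.0, 0.4],
              [0.3, 0.4, 0.0]])            # each row sums to at most 1
print(solve_radiosity(E, rho, F))
```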

Relevance:

10.00%

Publisher:

Abstract:

Realistic rendering of animations is known to be an expensive processing task when physically-based global illumination methods are used to improve illumination details. This paper presents an acceleration technique to compute animations in radiosity environments. The technique is based on an interpolation approach that exploits temporal coherence in radiosity. A fast global Monte Carlo pre-processing step is introduced into the computation of the whole animated sequence to select important frames. These are fully computed and used as a basis for the interpolation of the whole sequence. The approach is completely view-independent: once the illumination is computed, it can be visualized by any animated camera. Results show significant speed-ups, indicating that the technique could be an interesting alternative to deterministic methods for computing non-interactive radiosity animations for moderately complex scenarios.
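A minimal sketch of the interpolation idea, assuming the per-patch radiosity vectors of a few key frames have already been fully computed; the Monte Carlo step used to select those key frames is not reproduced:

```python
import numpy as np

def interpolate_radiosity(key_frames, key_times, query_time):
    """Per-patch linear interpolation of radiosity between two fully computed
    key frames; intermediate frames of the animation are never solved."""
    key_times = np.asarray(key_times, dtype=float)
    k = int(np.searchsorted(key_times, query_time, side="right")) - 1
    k = max(0, min(k, len(key_times) - 2))
    t0, t1 = key_times[k], key_times[k + 1]
    w = (query_time - t0) / (t1 - t0)
    return (1 - w) * key_frames[k] + w * key_frames[k + 1]

# Radiosity vectors (one value per patch) fully computed at frames 0, 10 and 30.
keys = [np.array([0.2, 0.8, 0.1]), np.array([0.3, 0.6, 0.2]), np.array([0.9, 0.1, 0.4])]
print(interpolate_radiosity(keys, [0, 10, 30], 5))   # halfway between the first two keys
```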

Relevance:

10.00%

Publisher:

Abstract:

The classification of art painting images is a computer vision application that is growing considerably. The goal of this technology is to classify an art painting image automatically, in terms of artistic style, technique used, or its author. For this purpose, the image is analyzed by extracting some visual features. Many articles related to these problems have been published, but in general the proposed solutions are focused on a very specific field. In particular, algorithms are tested using images at different resolutions, acquired under different illumination conditions. That complicates the performance comparison of the different methods. In this context, it would be very interesting to construct a public art image database, in order to compare all the existing algorithms under the same conditions. This paper presents a large art image database, with the corresponding labels for the following characteristics: title, author, style and technique. Furthermore, a tool that manages this database has been developed, and it can be used to extract different visual features for any selected image. This data can be exported to a file in CSV format, allowing researchers to analyze the data with other tools. During the data collection, the tool stores the elapsed time of the calculation. Thus, this tool also allows comparing the efficiency, in computation time, of different mathematical procedures for extracting image data.
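A minimal sketch of the kind of workflow such a tool supports: extract a simple visual feature (here a joint RGB histogram, as an illustrative stand-in for the actual descriptors) and export it to CSV together with the elapsed extraction time:

```python
import csv
import time
import numpy as np

def color_histogram(image, bins=8):
    """A simple visual feature: a joint RGB histogram, flattened to a vector."""
    hist, _ = np.histogramdd(image.reshape(-1, 3), bins=(bins, bins, bins),
                             range=((0, 256), (0, 256), (0, 256)))
    return hist.ravel() / hist.sum()

def export_features(images, path="features.csv"):
    """Extract a feature vector per image and write it to CSV together with
    the elapsed extraction time, so different procedures can be compared."""
    with open(path, "w", newline="") as fh:
        writer = csv.writer(fh)
        for name, img in images.items():
            start = time.perf_counter()
            feats = color_histogram(img)
            elapsed = time.perf_counter() - start
            writer.writerow([name, f"{elapsed:.6f}"] + [f"{v:.5f}" for v in feats])

rng = np.random.default_rng(4)
export_features({"painting_a": rng.integers(0, 256, (64, 64, 3)),
                 "painting_b": rng.integers(0, 256, (64, 64, 3))})
```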

Relevance:

10.00%

Publisher:

Abstract:

In this work we propose a new automatic methodology for computing accurate digital elevation models (DEMs) in urban environments from low-baseline stereo pairs that shall be available in the future from a new kind of earth observation satellite. This setting makes both views of the scene very similar, thus avoiding occlusions and illumination changes, which are the main disadvantages of the commonly accepted large-baseline configuration. There still remain two crucial technological challenges: (i) precisely estimating DEMs with strong discontinuities and (ii) providing a statistically proven result, automatically. The first one is solved here by a piecewise affine representation that is well adapted to man-made landscapes, whereas the application of computational Gestalt theory introduces reliability and automation. In fact, this theory allows us to reduce the number of parameters to be adjusted, and to control the number of false detections. This leads to the selection of a suitable segmentation into affine regions (whenever possible) by a novel and completely automatic perceptual grouping method. It also allows us to discriminate, e.g., vegetation-dominated regions, where such an affine model does not apply and a more classical correlation technique should be preferred. In addition, we propose here an extension of the classical "quantized" Gestalt theory to continuous measurements, thus combining its reliability with the precision of variational robust estimation and fine interpolation methods that are necessary in the low-baseline case. Such an extension is very general and will be useful for many other applications as well.
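A minimal sketch of the piecewise affine representation for a single segmented region: the elevation inside the region is modelled as z = a·x + b·y + c and fitted by least squares (the region segmentation and the Gestalt-based validation are not reproduced):

```python
import numpy as np

def fit_affine_region(xs, ys, zs):
    """Least-squares fit of an affine elevation model z = a*x + b*y + c
    to the height samples of one segmented region."""
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    coeffs, *_ = np.linalg.lstsq(A, zs, rcond=None)
    return coeffs  # (a, b, c)

# Synthetic roof-like region: a tilted plane plus small matching noise.
rng = np.random.default_rng(5)
xs = rng.uniform(0, 50, 500)
ys = rng.uniform(0, 50, 500)
zs = 0.1 * xs - 0.05 * ys + 12.0 + rng.normal(0, 0.2, 500)
print(fit_affine_region(xs, ys, zs))   # close to (0.1, -0.05, 12.0)
```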