982 results for depth image


Relevance:

30.00%

Publisher:

Abstract:

In this paper we address the problem of positioning a camera attached to the end-effector of a robotic manipulator so that it becomes parallel to a planar object, a problem long studied in visual servoing. Our approach attaches several laser pointers to the camera, arranged so as to produce a suitable set of visual features. The aim of using structured light is not only to ease image processing and to allow low-textured objects to be handled, but also to yield a control scheme with desirable properties such as decoupling, stability, good conditioning and a good camera trajectory.
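The abstract does not spell out the control law, but schemes of this kind are typically built on the classic image-based visual servoing relation v = -lambda * L+ (s - s*). The sketch below is a generic illustration of that relation, not the paper's controller; the interaction matrix and feature values are placeholders.

    import numpy as np

    def ibvs_velocity(s, s_star, L, gain=0.5):
        """Generic image-based visual servoing law: v = -gain * pinv(L) @ (s - s*).

        s, s_star : current and desired visual features (e.g. image points of laser spots)
        L         : interaction (image Jacobian) matrix relating feature rates to camera velocity
        Returns a 6-DOF camera velocity twist [vx, vy, vz, wx, wy, wz].
        """
        error = s - s_star
        return -gain * np.linalg.pinv(L) @ error

    # Toy usage with a made-up 4-feature interaction matrix (values are illustrative only)
    s = np.array([0.12, 0.05, -0.08, 0.10])
    s_star = np.zeros(4)
    L = np.random.default_rng(0).normal(size=(4, 6))
    print(ibvs_velocity(s, s_star, L))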

Relevance:

30.00%

Publisher:

Abstract:

Retinal blurring resulting from the human eye's depth of focus has been shown to assist visual perception. Infinite focal depth within stereoscopically displayed virtual environments may cause undesirable effects: for instance, objects positioned well in front of or behind the observer's fixation point are perceived in sharp focus with large disparities, thereby causing diplopia. Although published research on the incorporation of synthetically generated Depth of Field (DoF) suggests that it might enhance perceived image quality, no quantitative evidence of perceptual performance gains exists. This may be due to the difficulty of generating synthetic DoF dynamically, with focal distance actively linked to fixation distance. In this paper, such a system is described. A desktop stereographic display is used to project a virtual scene in which synthetically generated DoF is actively controlled from vergence-derived distance. A performance evaluation experiment was undertaken in which subjects carried out observations in a spatially complex virtual environment consisting of components interconnected by pipes on a distracting background; each subject was tasked with making an observation based on the connectivity of the components. The effects of focal depth variation in static and actively controlled focal distance conditions were investigated. The results and analysis show that performance gains may be achieved by the addition of synthetic DoF. The merits of applying synthetic DoF are discussed.
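For illustration only, a vergence-derived distance of the kind used above can be approximated from the interocular separation and the vergence angle, and that distance can then drive a blur term. The sketch below uses the standard thin-lens circle-of-confusion approximation with placeholder eye-like parameters; none of the values or function names come from the paper.

    import math

    def fixation_distance(ipd_m, vergence_rad):
        """Approximate fixation distance from the vergence angle between the two lines of sight."""
        return (ipd_m / 2.0) / math.tan(vergence_rad / 2.0)

    def blur_diameter(focal_dist, point_dist, aperture=0.004, focal_len=0.017):
        """Rough geometric circle-of-confusion diameter used to drive a synthetic DoF blur.
        aperture and focal_len are placeholder eye-like values, not parameters from the paper."""
        return abs(aperture * focal_len * (point_dist - focal_dist) /
                   (point_dist * (focal_dist - focal_len)))

    # Example: 64 mm interocular distance and ~3.66 degrees of vergence -> fixation near 1 m
    d = fixation_distance(0.064, math.radians(3.66))
    print(round(d, 2), "m, blur for a point at 2 m:", blur_diameter(d, 2.0))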

Relevance:

30.00%

Publisher:

Abstract:

A method has been developed to estimate aerosol optical depth (AOD) over land surfaces using high-spatial-resolution, hyperspectral, multiangle Compact High Resolution Imaging Spectrometer (CHRIS)/Project for On-Board Autonomy (PROBA) images. The CHRIS instrument is mounted aboard the PROBA satellite and provides up to 62 bands; the PROBA satellite allows pointing to obtain imagery from five different view angles within a short time interval. The method uses inversion of a coupled surface/atmosphere radiative transfer model and includes a general physical model of angular surface reflectance. An iterative process determines the optimum AOD value, giving the best fit between the corrected reflectance values for a number of view angles and wavelengths and those provided by the physical model. This method has previously been demonstrated on data from the Advanced Along-Track Scanning Radiometer and is extended here to the spectral and angular sampling of CHRIS/PROBA. The AOD values obtained from these observations are validated against ground-based sun-photometer measurements. Results from 22 image sets show an RMS error of 0.11 in AOD at 550 nm, which is reduced to 0.06 after an automatic screening procedure.
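Schematically, the iterative inversion amounts to choosing the AOD that minimises the misfit between the atmospherically corrected reflectances, over all view angles and bands, and the angular surface reflectance model. The sketch below shows only that outer fitting loop; the correction and surface-model functions are toy stand-ins, not the coupled radiative transfer code used by the authors.

    import numpy as np
    from scipy.optimize import minimize_scalar

    def retrieve_aod(toa_reflectance, correct_fn, surface_model_fn, bounds=(0.0, 2.0)):
        """Pick the AOD whose atmospheric correction of the multi-angle, multi-band
        top-of-atmosphere reflectances best matches a physical surface reflectance model."""
        def misfit(aod):
            surface = correct_fn(toa_reflectance, aod)       # stand-in for the atmospheric correction
            modelled = surface_model_fn(surface)             # stand-in for the angular surface model
            return np.sum((surface - modelled) ** 2)         # summed over view angles and wavelengths
        return minimize_scalar(misfit, bounds=bounds, method="bounded").x

    # Toy stand-ins just to make the sketch executable (5 view angles x 2 bands)
    toa = np.linspace(0.1, 0.3, 10).reshape(5, 2)
    demo_correct = lambda r, aod: r * np.exp(-aod * np.linspace(1.0, 1.5, r.shape[0]))[:, None]
    demo_model = lambda s: np.full_like(s, s.mean())
    print(retrieve_aod(toa, demo_correct, demo_model))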

Relevance:

30.00%

Publisher:

Abstract:

Techniques for retrieving reliable images of complicated objects are described; they overcome problems introduced by uneven surfaces, give enhanced depth resolution and improve image contrast. The techniques are illustrated by application to THz imaging of concealed wall paintings.

Relevance:

30.00%

Publisher:

Abstract:

This study investigated the contribution of stereoscopic depth cues to the reliability of ordinal depth judgments in complex natural scenes. Participants viewed photographs of cluttered natural scenes, either monocularly or stereoscopically. On each trial, they judged which of two indicated points in the scene was closer in depth. We assessed the reliability of these judgments over repeated trials, and how well they correlated with the actual disparities of the points between the left and right eyes' views. The reliability of judgments increased as their depth separation increased, was higher when the points were on separate objects, and deteriorated for point pairs that were more widely separated in the image plane. Stereoscopic viewing improved sensitivity to depth for points on the same surface, but not for points on separate objects. Stereoscopic viewing thus provides depth information that is complementary to that available from monocular occlusion cues.
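As an illustration of the kind of analysis described (within-observer reliability over repeated trials and agreement with the measured disparities), here is a minimal sketch with made-up data; nothing in it reflects the study's actual coding, stimuli or measures.

    import numpy as np

    def judgment_reliability(judgments):
        """judgments: (n_pairs, n_repeats) array of +1 / -1 for 'first point closer' / 'second closer'.
        Reliability = proportion of repeats that agree with each pair's majority response."""
        majority = np.sign(judgments.sum(axis=1, keepdims=True))
        return (judgments == majority).mean()

    def agreement_with_disparity(judgments, disparity_diff):
        """Fraction of individual judgments whose sign matches the measured disparity difference."""
        return (np.sign(judgments) == np.sign(disparity_diff)[:, None]).mean()

    # Toy data: 4 point pairs, each judged 3 times
    j = np.array([[1, 1, 1], [1, -1, 1], [-1, -1, -1], [1, 1, -1]])
    d = np.array([0.3, 0.1, -0.2, 0.05])       # disparity differences, arbitrary units
    print(judgment_reliability(j), agreement_with_disparity(j, d))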

Relevance:

30.00%

Publisher:

Abstract:

Image stitching is the process of joining several images to obtain a wider view of a scene. It is used, for example, in tourism to give the viewer the sensation of being in another place. I present an inexpensive solution for automatic real-time video and image stitching using two web cameras as the video/image sources. The proposed solution relies on several markers placed in the scene as reference points for the stitching algorithm. The implemented algorithm is divided into four main steps: marker detection, camera pose determination (with respect to the markers), video/image scaling and 3D transformation, and image translation. Wii remote controllers are used to support several steps in the process; their built-in IR camera provides clean marker detection, which facilitates the camera pose determination. The only restriction in the algorithm is that the markers have to be in the field of view when capturing the scene. Several tests were made to evaluate the final algorithm, which is able to perform video stitching at a frame rate between 8 and 13 fps. The joining of the two videos/images is good, with minor misalignments for objects at the same depth as the markers; misalignments in the background and foreground are larger. The capture process is simple enough that anyone can perform a stitching after a very short explanation. Although real-time video stitching can be achieved by this affordable approach, the current version has a few shortcomings. For example, contrast inconsistency along the stitching line could be reduced by applying a color correction algorithm to each source video, and misalignments in stitched images due to camera lens distortion could be eased by an optical correction algorithm. The work was developed in Apple's Quartz Composer, a visual programming environment, and a library of extended functions was developed using Xcode tools, also from Apple.
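The geometric core of such marker-based stitching can be illustrated by estimating a homography from matched marker positions and applying a perspective warp. The sketch below uses OpenCV for that step only; it is an assumption made for illustration and not the Quartz Composer / Wii-remote pipeline described in the work.

    import cv2
    import numpy as np

    def stitch_with_markers(img_left, img_right, markers_left, markers_right):
        """Warp img_right into img_left's image plane using >= 4 matched marker positions."""
        H, _ = cv2.findHomography(np.asarray(markers_right, np.float32),
                                  np.asarray(markers_left, np.float32), cv2.RANSAC)
        h, w = img_left.shape[:2]
        canvas = cv2.warpPerspective(img_right, H, (w * 2, h))   # project the right view onto the left's plane
        canvas[0:h, 0:w] = img_left                              # paste the left view (no blending)
        return canvas

    # Toy usage: two blank frames and four shared marker positions, shifted between the views
    left = np.zeros((240, 320, 3), np.uint8)
    right = np.zeros((240, 320, 3), np.uint8)
    ml = np.array([[200, 60], [300, 60], [300, 180], [200, 180]], np.float32)
    mr = ml - np.float32([150, 0])
    print(stitch_with_markers(left, right, ml, mr).shape)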

Relevance:

30.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

30.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

30.00%

Publisher:

Abstract:

The stretch zone width (SZW) of 15-5PH steel CTOD specimens fractured at temperatures from -150 °C to +23 °C was measured on focused images and 3D maps obtained by extended depth-of-field reconstruction from light microscopy (LM) image stacks. This LM-based method, with its larger lateral resolution, seems to be as effective for quantitative SZW analysis as scanning electron microscopy (SEM) or confocal scanning laser microscopy (CSLM), permitting clear identification of stretch zone boundaries. Despite the poorer sharpness of the focused images, a robust linear correlation was established between fracture toughness (KC) and the SZW data measured at the center region of the tested 15-5PH steel specimens. The method is an alternative for evaluating the boundaries of stretched zones at a lower cost of implementation and training, since topographic data from the elevation maps can be associated with the reconstructed image, which preserves the original contrast and brightness information. Finally, the extended depth-of-field method is presented here as a valuable tool for failure analysis and a cheaper alternative for investigating rough or fractured surfaces compared with scanning electron or confocal light microscopes. Microsc. Res. Tech. 75:1155-1158, 2012. (C) 2012 Wiley Periodicals, Inc.
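Extended depth-of-field reconstruction of this kind typically selects, for each pixel, the slice of the through-focus stack with the strongest local focus response, yielding both an all-in-focus image and an elevation map. The sketch below is a generic version of that idea; the Laplacian-based focus measure and smoothing window are common choices, not necessarily those used in the paper.

    import cv2
    import numpy as np

    def extended_depth_of_field(stack):
        """stack: sequence of grayscale images taken at increasing focus depths (same size).

        Returns (focused_image, elevation_map): per pixel, the value from the sharpest slice
        and the index of that slice (a proxy for surface height)."""
        stack = np.asarray(stack, np.float32)
        # Per-slice focus measure: absolute Laplacian, locally smoothed
        focus = np.stack([cv2.GaussianBlur(np.abs(cv2.Laplacian(s, cv2.CV_32F)), (9, 9), 0)
                          for s in stack])
        elevation = np.argmax(focus, axis=0)                   # index of the sharpest slice per pixel
        rows, cols = np.indices(elevation.shape)
        focused = stack[elevation, rows, cols]                 # take each pixel from its sharpest slice
        return focused, elevation

    # Toy demo with two random slices, just to show the shapes involved
    rng = np.random.default_rng(1)
    slices = [rng.random((64, 64)).astype(np.float32) for _ in range(2)]
    img, height = extended_depth_of_field(slices)
    print(img.shape, height.min(), height.max())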

Relevance:

30.00%

Publisher:

Abstract:

Objectives: To compare simulated periodontal bone defect depths measured in digital radiographs with dedicated and non-dedicated software systems, and to compare the depth measurements from each program with the measurements in dry mandibles. Methods: Forty periodontal bone defects were created at the proximal area of the first premolar in dry pig mandibles. Measurements of the defects were performed with a periodontal probe in the dry mandibles. Periapical digital radiographs of the defects were recorded using the Schick sensor in a standardized exposure setting. All images were read using the Schick dedicated software system (CDR DICOM for Windows v.3.5) and three commonly available non-dedicated software systems (Vix Win 2000 v.1.2, Adobe Photoshop 7.0 and Image Tool 3.0). The defects were measured three times in each image and a consensus was reached among three examiners using the four software systems. The differences between the radiographic measurements were analysed using analysis of variance (ANOVA), and the measurements from each software system were compared with the dry mandible measurements using Student's t-test. Results: The mean values of the bone defects measured in the radiographs were 5.07 mm, 5.06 mm, 5.01 mm and 5.11 mm for CDR Digital Image and Communication in Medicine (DICOM) for Windows, Vix Win, Adobe Photoshop and Image Tool, respectively, and 6.67 mm for the dry mandible. The means of the measurements performed in the four software systems were not significantly different (ANOVA, P = 0.958). A significant underestimation of defect depth was obtained when the mean depths from each software system were compared with the dry mandible measurements (t-test; P ≈ 0.000). Conclusions: The periodontal bone defect measurements in the dedicated and the three non-dedicated software systems were not significantly different, but all of them underestimated the defect depth when compared with the measurements obtained in the dry mandibles.
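The statistical comparison described (ANOVA across the four software systems, then each system against the dry-mandible reference) can be reproduced schematically with SciPy. The arrays below are invented placeholders whose means merely mimic the reported values, and the one-sample tests against the reference mean are a simplification of the paper's comparison.

    import numpy as np
    from scipy import stats

    # Placeholder depth measurements (mm); the data are invented, only the means echo the abstract
    rng = np.random.default_rng(0)
    cdr, vixwin, photoshop, imagetool = (rng.normal(m, 0.4, 40) for m in (5.07, 5.06, 5.01, 5.11))
    dry_mandible_mean = 6.67

    # ANOVA: do the four software systems differ from one another?
    f_stat, p_anova = stats.f_oneway(cdr, vixwin, photoshop, imagetool)

    # One-sample t-tests against the dry-mandible reference mean
    p_vs_reference = {name: stats.ttest_1samp(values, dry_mandible_mean).pvalue
                      for name, values in [("CDR", cdr), ("VixWin", vixwin),
                                           ("Photoshop", photoshop), ("ImageTool", imagetool)]}
    print(p_anova, p_vs_reference)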

Relevance:

30.00%

Publisher:

Abstract:

Purpose: This study was performed to compare inverted digital images and film-based images of dry pig mandibles for measuring periodontal bone defect depth. Materials and Methods: Forty 2-wall bone defects were made in the proximal region of the premolar in dry pig mandibles. Digital and conventional radiographs were taken using a Schick sensor and Kodak F-speed intraoral film. Image manipulation (inversion) was performed using Adobe Photoshop 7.0 software. Four trained examiners made all of the radiographic measurements in millimeters, three times each, from the cementoenamel junction to the most apical extension of the bone loss, with both types of images: inverted digital and film. The measurements were also made in the dry mandibles using a periodontal probe and a digital caliper. Student's t-test was used to compare the depth measurements obtained from the two types of images with the direct visual measurements in the dry mandibles, with a significance level of 0.05 (95% confidence interval) for each comparison. Results: There was a significant difference between the depth measurements in the inverted digital images and the direct visual measurements (p = 0.0039), with means of 6.29 mm (95% CI: 6.04-6.54) and 6.79 mm (95% CI: 6.45-7.11), respectively. There was no significant difference between the film-based radiographs and the direct visual measurements (p = 0.4950), with means of 6.64 mm (95% CI: 6.40-6.89) and 6.79 mm (95% CI: 6.45-7.11), respectively. Conclusion: The periodontal bone defect measurements in the inverted digital images were smaller than those from film-based radiographs, underestimating the amount of bone loss. © 2012 by Korean Academy of Oral and Maxillofacial Radiology.
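The inversion step performed in Photoshop is simply an intensity complement of the radiograph; a one-function NumPy equivalent, given purely for illustration, is shown below.

    import numpy as np

    def invert_radiograph(image_8bit):
        """Intensity complement of an 8-bit grayscale radiograph (equivalent to a standard 'Invert')."""
        return 255 - np.asarray(image_8bit, np.uint8)

    print(invert_radiograph([[0, 128, 255]]))   # -> [[255 127   0]]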

Relevance:

30.00%

Publisher:

Abstract:

In this thesis we developed solutions to common issues with widefield microscopes, addressing the problem of intensity inhomogeneity within an image and dealing with two strong limitations: the impossibility of acquiring either highly detailed images representative of whole samples or images of deep 3D objects. First, we cope with the non-uniform distribution of the light signal inside a single image, known as vignetting. In particular, we proposed, for both light and fluorescence microscopy, non-parametric multi-image methods in which the vignetting function is estimated directly from the sample, without requiring any prior information. After obtaining flat-field corrected images, we studied how to overcome the limited field of view of the camera, so as to be able to acquire large areas at high magnification. To this purpose, we developed mosaicing techniques capable of working on-line: starting from a set of manually acquired overlapping images, we validated a fast registration approach that accurately stitches the images together. Finally, we worked on virtually extending the field of view of the camera in the third dimension, with the purpose of reconstructing a single, completely in-focus image from objects that have significant depth or lie in different focal planes. After studying the existing approaches for extending the depth of focus of the microscope, we proposed a general method that does not require any prior information. To compare the outcomes of existing methods, different standard metrics are commonly used in the literature; however, no metric is available to compare different methods on real cases. First, we validated a metric able to rank the methods as the Universal Quality Index does, but without needing any reference ground truth. Second, we showed that the approach we developed performs better in both synthetic and real cases.
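The flat-field correction step can be sketched as follows: estimate the vignetting field from many images of the sample itself and divide it out. The per-pixel median plus heavy smoothing used here is only a crude stand-in for the thesis's non-parametric estimator, and all parameter values are illustrative.

    import cv2
    import numpy as np

    def estimate_vignetting(images, blur_ksize=101):
        """Crude multi-image vignetting estimate: per-pixel median across the set,
        heavily smoothed and normalised to a peak of 1."""
        median = np.median(np.asarray(images, np.float32), axis=0)
        field = cv2.GaussianBlur(median, (blur_ksize, blur_ksize), 0)
        return field / field.max()

    def flat_field_correct(image, vignetting_field, eps=1e-6):
        """Divide out the estimated vignetting to obtain a flat-field corrected image."""
        return np.asarray(image, np.float32) / (vignetting_field + eps)

    # Toy demo on synthetic frames with a radial falloff
    yy, xx = np.mgrid[-1:1:128j, -1:1:128j]
    true_field = 1.0 - 0.4 * (xx**2 + yy**2)
    frames = [true_field * (0.5 + 0.5 * np.random.default_rng(i).random((128, 128))) for i in range(8)]
    corrected = flat_field_correct(frames[0], estimate_vignetting(frames))
    print(corrected.shape)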

Relevance:

30.00%

Publisher:

Abstract:

This work began with a theoretical study of the main image classification techniques known in the literature, with particular attention to the most widespread image representation models, such as the Bag of Visual Words model, and to the main Machine Learning tools. Attention then focused on analysing what constitutes the state of the art in image classification, namely Deep Learning. To experiment with the advantages of this set of image classification methodologies, Torch7 was used: an open-source numerical computing framework, scriptable through the Lua language, with broad support for state-of-the-art Deep Learning methods. The actual image classification was implemented in Torch7 because this framework, also thanks to analysis work previously carried out by some of my colleagues, proved very effective at categorising objects in images. The images on which the experimental tests were based belong to a dataset created ad hoc for the 3D vision system, with the aim of testing the system for visually impaired and blind users; it contains some of the main obstacles that a visually impaired person may encounter in everyday life. In particular, the dataset consists of potential obstacles in a hypothetical outdoor usage scenario. Having established that Torch7 was the right platform for classification, attention turned to the possibility of exploiting stereo vision to increase the accuracy of the classification itself. Indeed, the images in the dataset mentioned above were acquired with a stereo camera with FPGA processing developed by the research group where this work was carried out. This made it possible to use 3D information, such as the depth level of each object in the image, to segment the objects of interest through an algorithm implemented in C++, excluding the rest of the scene. The last phase of the work was to test Torch7 on the image dataset, previously segmented with the segmentation algorithm just outlined, in order to recognise the type of obstacle detected by the system.
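The depth-based segmentation was implemented in C++ in the original work; an illustrative Python equivalent of the idea (keep only pixels whose depth is close to the candidate obstacle's depth and suppress the rest of the scene) might look like the sketch below. The tolerance and depth encoding are assumptions, not values from the thesis.

    import numpy as np

    def segment_by_depth(image, depth_map, target_depth, tolerance=0.25):
        """Mask out everything whose depth differs from the candidate obstacle's depth.

        depth_map: per-pixel depth with the same height/width as image (e.g. from a stereo camera).
        target_depth and tolerance are in the same (assumed) units as depth_map.
        """
        mask = np.abs(depth_map - target_depth) <= tolerance
        segmented = image.copy()
        segmented[~mask] = 0            # suppress the rest of the scene before classification
        return segmented, mask

    # Toy demo: 4x4 RGB image with an "obstacle" at roughly 1.0 m
    img = np.full((4, 4, 3), 200, np.uint8)
    depth = np.array([[0.9, 1.0, 2.0, 2.1]] * 4)
    seg, mask = segment_by_depth(img, depth, target_depth=1.0)
    print(mask.astype(int))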

Relevance:

30.00%

Publisher:

Abstract:

Limitations associated with the visual information provided to surgeons during laparoscopic surgery increase the difficulty of procedures and thus reduce clinical indications and increase training time. This work presents a novel augmented reality visualization approach that aims to improve the visual data supplied for the targeting of non-visible anatomical structures in laparoscopic visceral surgery. The approach aims to facilitate the localisation of hidden structures with minimal damage to surrounding structures and with minimal training requirements. The proposed augmented reality visualization incorporates endoscopic images overlaid with virtual 3D models of underlying critical structures, in addition to targeting and depth information pertaining to the targeted structures. Image overlay was achieved through camera calibration techniques and the integration of an optically tracked endoscope into an existing image guidance system for liver surgery. The approach was validated in accuracy, clinical integration and targeting experiments. The overlay accuracy had a mean value of 3.5 mm ± 1.9 mm, and 92.7% of targets within a liver phantom were successfully located laparoscopically by untrained subjects using the approach.
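The overlay step essentially projects points of the tracked 3D model into the calibrated endoscope image. The sketch below shows that projection generically with OpenCV; the calibration values, pose and model points are placeholders, not those of the system described.

    import cv2
    import numpy as np

    def overlay_points(frame, model_points_3d, rvec, tvec, camera_matrix, dist_coeffs):
        """Project 3D model points (expressed relative to the tracked camera pose) into the
        endoscopic image and draw them. All calibration inputs come from standard camera
        calibration; the values used below are placeholders."""
        pts_2d, _ = cv2.projectPoints(model_points_3d, rvec, tvec, camera_matrix, dist_coeffs)
        for (u, v) in pts_2d.reshape(-1, 2):
            cv2.circle(frame, (int(round(u)), int(round(v))), 3, (0, 255, 0), -1)
        return frame

    # Placeholder calibration, pose and model
    K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
    dist = np.zeros(5)
    rvec = np.zeros(3)
    tvec = np.array([0.0, 0.0, 0.3])                       # model 30 cm in front of the camera
    model = np.array([[0.0, 0.0, 0.0], [0.01, 0.0, 0.0], [0.0, 0.01, 0.0]])
    frame = np.zeros((480, 640, 3), np.uint8)
    print(overlay_points(frame, model, rvec, tvec, K, dist).sum())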