992 results for depth image


Relevance:

30.00%

Publisher:

Abstract:

Image stitching is the process of joining several images to obtain a wider view of a scene. It is used, for example, in tourism to give the viewer the sensation of being in another place. I present an inexpensive solution for automatic real-time video and image stitching that uses two web cameras as the video/image sources. The proposed solution relies on several markers placed in the scene as reference points for the stitching algorithm. The implemented algorithm is divided into four main steps: marker detection, camera pose determination (with respect to the markers), video/image resizing and 3D transformation, and image translation. Wii remote controllers support several steps of the process: their built-in IR camera provides clean marker detection, which facilitates camera pose determination. The only restriction of the algorithm is that the markers have to be in the field of view when capturing the scene. Several tests were made to evaluate the final algorithm. It performs video stitching at a frame rate between 8 and 13 fps. The joining of the two videos/images is good, with minor misalignments for objects at the same depth as the markers; misalignments in the background and foreground are larger. The capture process is simple enough that anyone can perform a stitching after a very short explanation. Although real-time video stitching can be achieved with this affordable approach, the current version has a few shortcomings. For example, contrast inconsistency along the stitching line could be reduced by applying a color-correction algorithm to each source video, and misalignments in the stitched images due to camera lens distortion could be eased by an optical correction algorithm. The work was developed in Apple's Quartz Composer, a visual programming environment; a library of extended functions was developed using Xcode tools, also from Apple.
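The final "image translation" step above can be sketched as follows. This is a minimal NumPy illustration under simplifying assumptions (grayscale images, one shared marker already detected in both views, non-negative offsets), not the Quartz Composer implementation:

```python
import numpy as np

def stitch_by_translation(img_a, img_b, marker_a, marker_b):
    """Align img_b to img_a using the pixel coordinates of one marker
    seen in both images (marker_a in img_a, marker_b in img_b), then
    paste both onto a common canvas. Marker detection and pose
    estimation are assumed to have been done already; offsets are
    assumed non-negative for brevity."""
    dy = marker_a[0] - marker_b[0]          # row offset of img_b on the canvas
    dx = marker_a[1] - marker_b[1]          # column offset of img_b
    h = max(img_a.shape[0], img_b.shape[0] + dy)
    w = max(img_a.shape[1], img_b.shape[1] + dx)
    canvas = np.zeros((h, w), dtype=img_a.dtype)
    canvas[:img_a.shape[0], :img_a.shape[1]] = img_a
    canvas[dy:dy + img_b.shape[0], dx:dx + img_b.shape[1]] = img_b
    return canvas

# Two 4x4 tiles whose shared marker sits at (1, 3) in A and (1, 0) in B,
# so B lands 3 columns to the right of A on a 4x7 canvas.
a = np.full((4, 4), 1, dtype=np.uint8)
b = np.full((4, 4), 2, dtype=np.uint8)
pano = stitch_by_translation(a, b, (1, 3), (1, 0))
```

A real stitcher would blend the overlap region and handle the full 3D transformation; this shows only the translation bookkeeping.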

Relevance:

30.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

30.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

30.00%

Publisher:

Abstract:

The stretch zone width (SZW) of 15-5PH steel CTOD specimens fractured at temperatures from -150 degrees C to +23 degrees C was measured on focused images and 3D maps obtained by extended depth-of-field reconstruction from light microscopy (LM) image stacks. This LM-based method, with its larger lateral resolution, seems to be as effective for quantitative SZW analysis as scanning electron microscopy (SEM) or confocal scanning laser microscopy (CSLM), making it possible to clearly identify stretch zone boundaries. Despite the lower sharpness of the focused images, a robust linear correlation was established between fracture toughness (KC) and the SZW data for the tested 15-5PH steel specimens, measured at their center region. The method is an alternative for evaluating the boundaries of stretched zones at a lower cost of implementation and training, since topographic data from elevation maps can be associated with the reconstructed image, which retains the original contrast and brightness information. Finally, the extended depth-of-field method is presented here as a valuable tool for failure analysis and a cheaper alternative for investigating rough or fractured surfaces compared with scanning electron or confocal light microscopes. Microsc. Res. Tech. 75:1155-1158, 2012. (C) 2012 Wiley Periodicals, Inc.
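The core of extended depth-of-field reconstruction from an image stack can be sketched as below. This is an assumed, textbook-style focus-stacking recipe (per-pixel selection of the sharpest slice, scored by the squared Laplacian), not the paper's exact pipeline; the winning slice index doubles as the elevation map mentioned above:

```python
import numpy as np

def extended_depth_of_field(stack):
    """For each pixel, keep the value from the stack slice that is
    locally sharpest, and return the index map (a proxy elevation map)."""
    sharp = []
    for img in stack:
        img = img.astype(float)
        # 4-neighbour Laplacian as a simple per-pixel focus measure
        lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
               np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
        sharp.append(lap ** 2)
    best = np.argmax(np.stack(sharp), axis=0)   # sharpest slice per pixel
    cube = np.stack([s.astype(float) for s in stack])
    fused = np.take_along_axis(cube, best[None], axis=0)[0]
    return fused, best

# Slice 0 is flat (out of focus); slice 1 has a sharp bright spot,
# so the fused image takes the spot from slice 1.
s0 = np.full((5, 5), 10.0)
s1 = np.full((5, 5), 10.0)
s1[2, 2] = 100.0
fused, depth_map = extended_depth_of_field([s0, s1])
```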

Relevance:

30.00%

Publisher:

Abstract:

Objectives: To compare simulated periodontal bone defect depths measured in digital radiographs with dedicated and non-dedicated software systems, and to compare the depth measurements from each program with the measurements made on dry mandibles. Methods: Forty periodontal bone defects were created at the proximal area of the first premolar in dry pig mandibles. The defects were measured with a periodontal probe in the dry mandible. Periapical digital radiographs of the defects were recorded using the Schick sensor in a standardized exposure setting. All images were read using the dedicated Schick software (CDR DICOM for Windows v.3.5) and three commonly available non-dedicated software systems (Vix Win 2000 v.1.2, Adobe Photoshop 7.0 and Image Tool 3.0). The defects were measured three times in each image and a consensus was reached among three examiners using the four software systems. Differences between the radiographic measurements were analysed using analysis of variance (ANOVA), and the measurements from each software system were compared with the dry mandible measurements using Student's t-test. Results: The mean depths of the bone defects measured in the radiographs were 5.07 mm, 5.06 mm, 5.01 mm and 5.11 mm for CDR Digital Image and Communication in Medicine (DICOM) for Windows, Vix Win, Adobe Photoshop and Image Tool, respectively, and 6.67 mm for the dry mandible. The means of the measurements performed in the four software systems were not significantly different (ANOVA, P = 0.958). A significant underestimation of defect depth was found when the mean depths from each software system were compared with the dry mandible measurements (t-test; P ≈ 0.000). Conclusions: The periodontal bone defect measurements in the dedicated and the three non-dedicated software systems were not significantly different, but all underestimated the defect depth compared with the measurements obtained on the dry mandibles.
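The ANOVA comparison across the four software systems can be sketched as follows. The readings below are hypothetical stand-ins chosen to mimic the near-identical means reported, since the study's raw data are not reproduced here:

```python
import numpy as np

def one_way_anova_F(groups):
    """Minimal one-way ANOVA F statistic: between-group mean square
    divided by within-group mean square."""
    groups = [np.asarray(g, float) for g in groups]
    n = sum(len(g) for g in groups)
    k = len(groups)
    grand = np.concatenate(groups).mean()
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical defect depths (mm) read with four software systems.
cdr       = [5.0, 5.1, 5.2, 5.0]
vixwin    = [5.0, 5.2, 5.1, 4.9]
photoshop = [4.9, 5.1, 5.0, 5.0]
imagetool = [5.1, 5.2, 5.0, 5.1]
F = one_way_anova_F([cdr, vixwin, photoshop, imagetool])
# F is well below typical critical values, i.e. no significant
# difference among software systems, as in the study's conclusion.
```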

Relevance:

30.00%

Publisher:

Abstract:

Purpose: This study compared inverted digital images and film-based images of dry pig mandibles for measuring periodontal bone defect depth. Materials and Methods: Forty 2-wall bone defects were made in the proximal region of the premolar in dry pig mandibles. Digital and conventional radiographs were taken using a Schick sensor and Kodak F-speed intraoral film. Image manipulation (inversion) was performed in Adobe Photoshop 7.0. Four trained examiners each made all of the radiographic measurements, in millimeters, three times, from the cementoenamel junction to the most apical extension of the bone loss, on both types of image: inverted digital and film. Measurements were also made on the dry mandibles using a periodontal probe and a digital caliper. Student's t-test was used to compare the depth measurements from the two types of image with the direct visual measurements on the dry mandibles, with a significance level of 0.05 (95% confidence interval) for each comparison. Results: There was a significant difference between the depth measurements in the inverted digital images and the direct visual measurements (p = 0.0039), with means of 6.29 mm (95% CI: 6.04-6.54) and 6.79 mm (95% CI: 6.45-7.11), respectively. There was no significant difference between the film-based radiographs and the direct visual measurements (p = 0.4950), with means of 6.64 mm (95% CI: 6.40-6.89) and 6.79 mm (95% CI: 6.45-7.11), respectively. Conclusion: The periodontal bone defect measurements in the inverted digital images were smaller than those from film-based radiographs, underestimating the amount of bone loss. © 2012 by Korean Academy of Oral and Maxillofacial Radiology.
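The grey-level inversion applied to the digital radiographs is the standard 8-bit complement (each pixel value v becomes 255 - v, so radiolucent defects appear bright instead of dark). A minimal NumPy sketch, with made-up pixel values:

```python
import numpy as np

def invert(img):
    """8-bit grey-level inversion: dark becomes light and vice versa."""
    return 255 - img

# Tiny hypothetical radiograph patch (uint8 grey levels).
radiograph = np.array([[0, 64],
                       [128, 255]], dtype=np.uint8)
inverted = invert(radiograph)
```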

Relevance:

30.00%

Publisher:

Abstract:

In this thesis we developed solutions to common issues with widefield microscopes, facing the problem of intensity inhomogeneity in an image and dealing with two strong limitations: the impossibility of acquiring either highly detailed images representative of whole samples or deep 3D objects. First, we cope with the non-uniform distribution of the light signal inside a single image, named vignetting. In particular we proposed, for both light and fluorescence microscopy, non-parametric multi-image methods in which the vignetting function is estimated directly from the sample, without requiring any prior information. After obtaining flat-field corrected images, we studied how to overcome the limited field of view of the camera, so as to be able to acquire large areas at high magnification. To this purpose, we developed mosaicing techniques capable of working online: starting from a set of overlapping images acquired manually, we validated a fast registration approach to accurately stitch the images together. Finally, we worked to virtually extend the field of view of the camera in the third dimension, with the purpose of reconstructing a single, completely in-focus image from objects that have significant depth or lie in different focal planes. After studying the existing approaches for extending the depth of focus of the microscope, we proposed a general method that does not require any prior information. To compare the outcomes of existing methods, several standard metrics are commonly used in the literature; however, no metric is available to compare different methods in real cases. First, we validated a metric able to rank the methods as the Universal Quality Index does, but without needing any reference ground truth. Second, we showed that the approach we developed performs better in both synthetic and real cases.
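Once a vignetting function has been estimated, the flat-field correction itself is a per-pixel division. The sketch below shows the general recipe (not the thesis's multi-image estimator), with a synthetic radial fall-off standing in for a real vignette:

```python
import numpy as np

def flat_field_correct(img, vignette):
    """Divide the observed image by the estimated vignetting function,
    normalised so the brightest region keeps its intensity."""
    v = vignette / vignette.max()
    return img / v

# A uniform 100-intensity scene darkened by a radial fall-off
# (bright centre, dark corners), then restored to a flat image.
yy, xx = np.mgrid[0:32, 0:32]
r2 = (yy - 15.5) ** 2 + (xx - 15.5) ** 2
vignette = 1.0 - 0.5 * r2 / r2.max()
scene = np.full((32, 32), 100.0)
observed = scene * vignette
restored = flat_field_correct(observed, vignette)
```

The thesis's contribution is in estimating `vignette` directly from overlapping sample images; here it is given.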

Relevance:

30.00%

Publisher:

Abstract:

This work began with a theoretical study of the main image classification techniques known in the literature, with particular attention to the most widespread image representation models, such as the Bag of Visual Words model, and to the main Machine Learning tools. Attention then focused on what constitutes the state of the art for image classification, namely Deep Learning. To experiment with the advantages of this set of image classification methodologies, Torch7 was used: an open-source numerical computing framework, scriptable through the Lua language, with broad support for state-of-the-art Deep Learning methods. The actual image classification was implemented with Torch7, since this framework, thanks in part to the analysis previously carried out by some of my colleagues, proved to be very effective at categorizing objects in images. The images used for the experimental tests belong to a dataset created ad hoc for the 3D vision system, with the aim of testing the system for visually impaired and blind users; it contains some of the main obstacles that a visually impaired person may encounter in daily life. In particular, the dataset consists of potential obstacles relating to a hypothetical outdoor usage scenario. Having thus established that Torch7 was the platform to use for classification, attention turned to the possibility of exploiting Stereo Vision to increase classification accuracy. In fact, the images in this dataset were acquired with a Stereo Camera with FPGA-based processing, developed by the research group where this work was carried out.
This made it possible to use 3D information, such as the depth level of each object in the image, to segment the objects of interest through an algorithm implemented in C++, excluding the rest of the scene. The last phase of the work was to test Torch7 on the image dataset, previously segmented with the algorithm just outlined, in order to recognize the type of obstacle detected by the system.
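The depth-based segmentation step can be illustrated as follows. This NumPy sketch is a hypothetical stand-in for the C++ routine described above: pixels whose stereo depth falls inside a band around the obstacle are kept, and the rest of the scene is blanked out:

```python
import numpy as np

def segment_by_depth(image, depth, near, far):
    """Keep pixels whose depth lies in [near, far]; zero out the rest.
    Returns the segmented image and the boolean mask."""
    mask = (depth >= near) & (depth <= far)
    out = np.zeros_like(image)
    out[mask] = image[mask]
    return out, mask

# Synthetic 4x4 frame: an obstacle at ~2 m in front of a 10 m background.
img = np.arange(16, dtype=np.uint8).reshape(4, 4)
depth = np.full((4, 4), 10.0)
depth[1:3, 1:3] = 2.0
segmented, mask = segment_by_depth(img, depth, 1.0, 3.0)
```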

Relevance:

30.00%

Publisher:

Abstract:

Limitations in the visual information provided to surgeons during laparoscopic surgery increase the difficulty of procedures and thus reduce clinical indications and increase training time. This work presents a novel augmented reality visualization approach that aims to improve the visual data supplied for targeting non-visible anatomical structures in laparoscopic visceral surgery. The approach aims to facilitate the localisation of hidden structures with minimal damage to surrounding structures and with minimal training requirements. It overlays endoscopic images with virtual 3D models of underlying critical structures, together with targeting and depth information for the targeted structures. Image overlay was achieved by implementing camera calibration techniques and integrating the optically tracked endoscope into an existing image guidance system for liver surgery. The approach was validated in accuracy, clinical integration and targeting experiments. The overlay accuracy had a mean value of 3.5 mm ± 1.9 mm, and 92.7% of targets within a liver phantom were successfully located laparoscopically by non-trained subjects using the approach.
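Overlaying a tracked 3D structure onto the endoscopic image reduces, at its core, to the calibrated pinhole projection x ~ K(RX + t). The sketch below uses textbook, illustrative numbers (the intrinsics and pose are not the paper's):

```python
import numpy as np

def project(K, R, t, X):
    """Project a 3D point X (in tracker/camera coordinates) onto the
    image plane using intrinsics K and extrinsics (R, t)."""
    p = K @ (R @ X + t)
    return p[:2] / p[2]

# Hypothetical intrinsics from camera calibration (focal 800 px,
# principal point at image centre 320x240).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                      # endoscope pose from optical tracking
t = np.zeros(3)
X_on_axis = np.array([0.0, 0.0, 100.0])   # hidden structure 100 mm ahead
u, v = project(K, R, t, X_on_axis)        # lands on the principal point
u2, v2 = project(K, R, t, np.array([10.0, 0.0, 100.0]))
```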

Relevance:

30.00%

Publisher:

Abstract:

We present an algorithm for estimating dense image correspondences. Our versatile approach lends itself to various tasks typical of video post-processing, including image morphing, optical flow estimation, stereo rectification, disparity/depth reconstruction, and baseline adjustment. We incorporate recent advances in feature matching, energy minimization, stereo vision, and data clustering into our approach. At the core of our correspondence estimation we use Efficient Belief Propagation for energy minimization. While state-of-the-art algorithms only work on thumbnail-sized images, our novel feature downsampling scheme, combined with a simple yet efficient data-term compression, can cope with high-resolution data. The incorporation of SIFT (Scale-Invariant Feature Transform) features into the data-term computation further resolves matching ambiguities, making long-range correspondence estimation possible. We detect occluded areas by evaluating correspondence symmetry, and we apply Geodesic matting to automatically determine plausible values in these regions.
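The belief-propagation core can be illustrated on a toy problem. The sketch below is min-sum belief propagation on a 1-D chain of pixels (a drastically simplified stand-in for the 2-D Efficient Belief Propagation used above): each pixel has a data cost per label, neighbouring labels are coupled by a linear penalty, and messages are passed once in each direction, which is exact on a chain:

```python
import numpy as np

def bp_chain(data_cost, lam):
    """Min-sum BP on a chain. data_cost[i, d] is the matching cost of
    label d at pixel i; neighbours pay lam * |d - d'|."""
    n, L = data_cost.shape
    labels = np.arange(L)
    smooth = lam * np.abs(labels[:, None] - labels[None, :])
    fwd = np.zeros((n, L))   # fwd[i] = message from pixel i-1 into i
    bwd = np.zeros((n, L))   # bwd[i] = message from pixel i+1 into i
    for i in range(1, n):
        fwd[i] = np.min(data_cost[i - 1] + fwd[i - 1] + smooth, axis=1)
    for i in range(n - 2, -1, -1):
        bwd[i] = np.min(data_cost[i + 1] + bwd[i + 1] + smooth, axis=1)
    belief = data_cost + fwd + bwd
    return belief.argmin(axis=1)

# Three pixels, three labels: pixels 0 and 2 clearly prefer label 1,
# pixel 1 is noisy and alone would pick label 0; the smoothness term
# pulls it back to label 1.
cost = np.array([[5.0, 0.0, 5.0],
                 [1.0, 2.0, 5.0],
                 [5.0, 0.0, 5.0]])
disparity = bp_chain(cost, lam=2.0)
```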

Relevance:

30.00%

Publisher:

Abstract:

A study was performed to determine the feasibility of using a measurement of ribeye depth (RED) from a longitudinal ultrasound image to estimate ribeye area (REA). The correlation between RED obtained with ultrasound and REA from a tracing was high for both implanted (r = .49) and non-implanted (r = .45) steers. The mean bias between predicted REA and actual REA was not different from zero. This analysis shows that RED could be an accurate indicator of REA.
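Predicting REA from RED amounts to a simple linear calibration plus a correlation check. The sketch below uses made-up readings for illustration, since the study's data are not reproduced here:

```python
import numpy as np

def fit_linear(red, rea):
    """Least-squares line rea ≈ a * red + b, plus Pearson r."""
    a, b = np.polyfit(red, rea, 1)
    r = np.corrcoef(red, rea)[0, 1]
    return a, b, r

red = np.array([4.0, 4.5, 5.0, 5.5, 6.0])       # ribeye depth, hypothetical
rea = np.array([60.0, 68.0, 75.0, 82.0, 90.0])  # ribeye area, hypothetical
a, b, r = fit_linear(red, rea)
predicted = a * red + b   # REA predicted from RED alone
```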

Relevance:

30.00%

Publisher:

Abstract:

HYPOTHESIS A previously developed image-guided robot system can safely drill a tunnel from the lateral mastoid surface, through the facial recess, to the middle ear, as a viable alternative to conventional mastoidectomy for cochlear electrode insertion. BACKGROUND Direct cochlear access (DCA) provides a minimally invasive tunnel from the lateral surface of the mastoid through the facial recess to the middle ear for cochlear electrode insertion. A safe and effective tunnel drilled through the narrow facial recess requires a highly accurate image-guided surgical system. Previous attempts have relied on patient-specific templates and robotic systems to guide drilling tools. In this study, we report on improvements made to an image-guided surgical robot system developed specifically for this purpose and the resulting accuracy achieved in vitro. MATERIALS AND METHODS The proposed image-guided robotic DCA procedure was carried out bilaterally on 4 whole-head cadaver specimens. Specimens were implanted with titanium fiducial markers and imaged with cone-beam CT. A preoperative plan was created using a custom software package wherein relevant anatomical structures of the facial recess were segmented, and a drill trajectory targeting the round window was defined. Patient-to-image registration was performed with the custom robot system to reference the preoperative plan, and the DCA tunnel was drilled in 3 stages with progressively longer drill bits. The position of the drilled tunnel was defined as a line fitted to a point cloud of the segmented tunnel using principal component analysis (the pca function in MATLAB). The accuracy of the DCA was then assessed by coregistering preoperative and postoperative image data and measuring the deviation of the drilled tunnel from the plan. The final step of electrode insertion was also performed through the DCA tunnel after manual removal of the promontory through the external auditory canal.
RESULTS Drilling error was defined as the lateral deviation of the tool in the plane perpendicular to the drill axis (excluding depth error). Errors of 0.08 ± 0.05 mm and 0.15 ± 0.08 mm were measured on the lateral mastoid surface and at the target on the round window, respectively (n = 8). Full electrode insertion was possible in 7 cases. In 1 case, the electrode was partially inserted with 1 contact pair external to the cochlea. CONCLUSION The purpose-built robot system was able to perform a safe and reliable DCA for cochlear implantation. The workflow implemented in this study mimics the envisioned clinical procedure, showing the feasibility of future clinical implementation.
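The PCA line fit used to define the drilled tunnel's axis can be sketched in a few lines. This is a NumPy stand-in for the MATLAB pca call named in the methods: the axis is the first principal component of the segmented voxel cloud, anchored at its centroid:

```python
import numpy as np

def fit_line_pca(points):
    """Fit a line to an (n, 3) point cloud: centroid plus the first
    principal component (first right singular vector of the centred
    points) as the direction."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    direction = vt[0]            # unit vector, sign is arbitrary
    return centroid, direction

# Noise-free voxels along the z axis; the fitted axis should be ±z.
pts = np.array([[0.0, 0.0, float(z)] for z in range(10)])
centroid, direction = fit_line_pca(pts)
```

Deviation from the planned trajectory can then be measured as the distance between this fitted line and the planned one at the entry and target depths.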