991 results for Point matching
Abstract:
Conclusion: A robot built specifically for stereotactic cochlear implantation provides accuracy equal to or better than existing approaches based on industrial robots, together with better integration into a clinical environment. Objectives: To evaluate the technical accuracy of a robotic system developed specifically for lateral skull base surgery in an experimental setup reflecting the intended clinical application. The invasiveness of cochlear electrode implantation procedures may be reduced by replacing the traditional mastoidectomy with a small tunnel slightly larger in diameter than the electrode itself. Methods: The end-to-end accuracy of the robot system and associated image-guided procedure was evaluated on 15 temporal bones of whole-head cadaver specimens. The main components of the procedure were as follows: reference screw placement, cone beam CT scan, computer-aided planning, pair-point matching of the surgical plan, robotic drilling of the direct access tunnel, and post-operative cone beam CT scan and accuracy assessment. Results: The mean accuracy at the target point (round window) was 0.56 ± 0.41 mm with an angular misalignment of 0.88 ± 0.41°. The procedural time from the registration process through completion of the drilling was 25 ± 11 min. The robot was fully operational in a clinical environment.
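The pair-point matching step above amounts to a least-squares rigid registration of the planned fiducial (reference screw) positions onto their intra-operatively measured positions. A minimal sketch of such a fit (SVD-based, Kabsch/Horn style) follows; the function names and the residual measure are illustrative, not taken from the study.

    import numpy as np

    def paired_point_registration(plan_pts, measured_pts):
        """Least-squares rigid transform (R, t) mapping plan_pts onto measured_pts.

        plan_pts, measured_pts: (N, 3) arrays of corresponding fiducial positions.
        Returns R (3x3) and t (3,) such that measured ~= R @ plan + t.
        """
        p_mean = plan_pts.mean(axis=0)
        m_mean = measured_pts.mean(axis=0)
        P = plan_pts - p_mean
        M = measured_pts - m_mean
        U, _, Vt = np.linalg.svd(P.T @ M)                    # cross-covariance SVD
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
        R = Vt.T @ D @ U.T
        t = m_mean - R @ p_mean
        return R, t

    def fiducial_registration_error(plan_pts, measured_pts, R, t):
        """RMS residual of the fit over the fiducials."""
        resid = measured_pts - (plan_pts @ R.T + t)
        return np.sqrt((resid ** 2).sum(axis=1).mean())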
Abstract:
Surgical navigation has proven to be a minimally invasive procedure that enables precise surgical interventions with reduced exposure to irradiation for patient and personnel. Fluoroscopy-based modules have prevailed on the market. For certain operations on the pelvis, computed tomography is necessary because of its high imaging quality and considerably larger scan volume. To enable navigation in these cases, matching of the CT data set and the patient's real pelvic bone is essential. The common pair point-matching algorithm is complemented by a surface-matching algorithm to achieve an even higher overall precision of the system. For conventional surface matching with a solid pointer, the bone has to be exposed from soft tissue quite extensively. This conflicts with the claim of computer-assisted surgery to be minimally invasive. We integrated an A-mode ultrasonic pointer with the intention of performing extended surface matching on the pelvic bone noninvasively. Compared with the conventional method, comparable and in some respects even improved precision could be established.
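Surface matching of the digitised bone-surface points (for example, those acquired with the A-mode ultrasonic pointer) against the CT surface model is typically done with an iterative-closest-point style refinement of the initial pair-point result. The sketch below illustrates that principle under those assumptions; it is not the navigation system's actual implementation.

    import numpy as np
    from scipy.spatial import cKDTree

    def rigid_fit(src, dst):
        """Least-squares rotation/translation mapping src points onto dst points."""
        sc, dc = src.mean(axis=0), dst.mean(axis=0)
        U, _, Vt = np.linalg.svd((src - sc).T @ (dst - dc))
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        return R, dc - R @ sc

    def icp_refine(model_pts, probe_pts, R, t, iterations=30):
        """Refine an initial registration (R, t) by iterative closest point.

        model_pts: (M, 3) points sampled from the CT bone-surface model.
        probe_pts: (N, 3) bone-surface points digitised on the patient;
                   R, t map CT coordinates into patient coordinates.
        """
        tree = cKDTree(model_pts)
        for _ in range(iterations):
            probe_in_ct = (probe_pts - t) @ R        # pull probe points into CT space
            _, idx = tree.query(probe_in_ct)         # closest model point per probe point
            R, t = rigid_fit(model_pts[idx], probe_pts)
        return R, t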
Abstract:
BACKGROUND: Neuronavigation has become an intrinsic part of preoperative surgical planning and surgical procedures. However, many surgeons have the impression that accuracy decreases during surgery. OBJECTIVE: To quantify the decrease of neuronavigation accuracy and identify possible origins, we performed a retrospective quality-control study. METHODS: Between April and July 2011, a neuronavigation system was used in conjunction with a specially prepared head holder in 55 consecutive patients. Two different neuronavigation systems were investigated separately. Coregistration was performed with laser-surface matching, paired-point matching using skin fiducials, anatomic landmarks, or bone screws. The initial target registration error (TRE1) was measured using the nasion as the anatomic landmark. Then, after draping and during surgery, the accuracy was checked at predefined procedural landmark steps (Mayfield measurement point and bone measurement point), and deviations were recorded. RESULTS: After initial coregistration, the mean (SD) TRE1 was 2.9 (3.3) mm. The TRE1 was significantly dependent on patient positioning, lesion localization, type of neuroimaging, and coregistration method. The following procedures decreased neuronavigation accuracy: attachment of surgical drapes (ΔTRE2 = 2.7 [1.7] mm), skin retractor attachment (ΔTRE3 = 1.2 [1.0] mm), craniotomy (ΔTRE3 = 1.0 [1.4] mm), and Halo ring installation (ΔTRE3 = 0.5 [0.5] mm). Surgery duration was also a significant factor: the overall ΔTRE was 1.3 [1.5] mm after 30 minutes and increased to 4.4 [1.8] mm after 5.5 hours of surgery. CONCLUSION: After registration, there is an ongoing loss of neuronavigation accuracy. The major factors were draping, attachment of skin retractors, and duration of surgery. Surgeons should be aware of this silent loss of accuracy when using neuronavigation.
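For reference, the target registration error at a landmark is the distance between the position the navigation system reports for the pointer tip at that landmark and the landmark's position in the registered image data; a per-step loss (ΔTRE) can then be expressed as the difference between successive checks. A minimal illustration with hypothetical coordinates (not study data):

    import numpy as np

    def tre(navigated_xyz, image_xyz):
        """Target registration error: Euclidean distance (mm) between the landmark
        position indicated by the pointer and its position in the image data."""
        return float(np.linalg.norm(np.asarray(navigated_xyz) - np.asarray(image_xyz)))

    # Hypothetical measurements at the nasion before and after draping (mm).
    tre_initial = tre([12.1, 48.3, 95.0], [11.0, 46.9, 93.2])
    tre_after_draping = tre([12.1, 48.3, 95.0], [9.4, 45.0, 92.1])
    delta_tre = tre_after_draping - tre_initial   # accuracy lost by the draping step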
Abstract:
Increasing in resolution of numerical weather prediction models has allowed more and more realistic forecasts of atmospheric parameters. Due to the growing variability into predicted fields the traditional verification methods are not always able to describe the model ability because they are based on a grid-point-by-grid-point matching between observation and prediction. Recently, new spatial verification methods have been developed with the aim of show the benefit associated to the high resolution forecast. Nested in among of the MesoVICT international project, the initially aim of this work is to compare the newly tecniques remarking advantages and disadvantages. First of all, the MesoVICT basic examples, represented by synthetic precipitation fields, have been examined. Giving an error evaluation in terms of structure, amplitude and localization of the precipitation fields, the SAL method has been studied more thoroughly respect to the others approaches with its implementation in the core cases of the project. The verification procedure has concerned precipitation fields over central Europe: comparisons between the forecasts performed by the 00z COSMO-2 model and the VERA (Vienna Enhanced Resolution Analysis) have been done. The study of these cases has shown some weaknesses of the methodology examined; in particular has been highlighted the presence of a correlation between the optimal domain size and the extention of the precipitation systems. In order to increase ability of SAL, a subdivision of the original domain in three subdomains has been done and the method has been applied again. Some limits have been found in cases in which at least one of the two domains does not show precipitation. The overall results for the subdomains have been summarized on scatter plots. With the aim to identify systematic errors of the model the variability of the three parameters has been studied for each subdomain.
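As a concrete reference for the SAL idea, the sketch below computes the amplitude component and the first part of the location component on gridded precipitation fields (domain-mean bias and normalised displacement of the field's centre of mass, per the usual SAL definitions); the object-based structure component and the second location term are omitted, and the grid spacing and function names are illustrative assumptions.

    import numpy as np

    def amplitude_component(r_mod, r_obs):
        """A: normalised difference of the domain-mean precipitation (range [-2, 2])."""
        d_mod, d_obs = r_mod.mean(), r_obs.mean()
        return (d_mod - d_obs) / (0.5 * (d_mod + d_obs))

    def centre_of_mass(field):
        """Precipitation-weighted centre of mass in grid-index coordinates."""
        iy, ix = np.indices(field.shape)
        total = field.sum()
        return np.array([(iy * field).sum(), (ix * field).sum()]) / total

    def location_l1(r_mod, r_obs, dx=1.0):
        """First location term: centre-of-mass displacement scaled by the
        largest distance across the domain (range [0, 1])."""
        ny, nx = r_obs.shape
        d = np.hypot(ny * dx, nx * dx)                  # domain diagonal
        shift = centre_of_mass(r_mod) - centre_of_mass(r_obs)
        return np.hypot(*shift) * dx / d

    # Toy example: a forecast field displaced with respect to the "observed" one.
    obs = np.zeros((100, 100)); obs[40:60, 40:60] = 5.0
    mod = np.zeros((100, 100)); mod[45:65, 50:70] = 6.0
    print(amplitude_component(mod, obs), location_l1(mod, obs))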
Abstract:
Given a point set P and a class C of geometric objects, G_C(P) is a geometric graph with vertex set P such that any two vertices p and q are adjacent if and only if there is some C ∈ C containing both p and q but no other points from P. We study G▽(P) graphs, where ▽ is the class of downward equilateral triangles (i.e., equilateral triangles with one of their sides parallel to the x-axis and the corner opposite to this side below that side). For point sets in general position, these graphs have been shown to be equivalent to half-Θ6 graphs and TD-Delaunay graphs. The main result in our paper is that for point sets P in general position, G▽(P) always contains a matching of size at least ⌈(|P| − 1)/3⌉, and this bound is tight. We also give some structural properties of G⋆(P) graphs, where ⋆ is the class that contains both upward and downward equilateral triangles. We show that for point sets in general position, the block cut point graph of G⋆(P) is simply a path. Through the equivalence of G⋆(P) graphs with Θ6 graphs, we also derive that any Θ6 graph can have at most 5n − 11 edges, for point sets in general position.
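To make the definition of G▽(P) concrete, the brute-force sketch below assumes the standard TD-Delaunay characterisation: p and q are adjacent exactly when the smallest downward equilateral triangle containing both contains no other point of P. Such a triangle is the intersection of three half-planes with fixed inward normals and is contained in every other downward triangle containing p and q; the code is an illustration of the definition, not the paper's construction.

    import numpy as np
    from itertools import combinations

    # Inward normals of a downward equilateral triangle (top side parallel to the
    # x-axis): u1.x >= a, u2.x >= b, u3.x <= c describes such a triangle.
    U1 = np.array([np.sqrt(3) / 2, 0.5])
    U2 = np.array([-np.sqrt(3) / 2, 0.5])
    U3 = np.array([0.0, 1.0])

    def downward_triangle_graph(points):
        """Edges of G_down(P): p ~ q iff the smallest downward equilateral
        triangle containing p and q contains no other point of P."""
        P = np.asarray(points, dtype=float)
        edges = []
        for i, j in combinations(range(len(P)), 2):
            p, q = P[i], P[j]
            a = min(U1 @ p, U1 @ q)
            b = min(U2 @ p, U2 @ q)
            c = max(U3 @ p, U3 @ q)
            blocked = any(
                U1 @ r >= a and U2 @ r >= b and U3 @ r <= c
                for k, r in enumerate(P) if k not in (i, j)
            )
            if not blocked:
                edges.append((i, j))
        return edges

    # Example on a small random point set in general position.
    rng = np.random.default_rng(0)
    print(downward_triangle_graph(rng.random((8, 2))))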
Abstract:
Point pattern matching in Euclidean Spaces is one of the fundamental problems in Pattern Recognition, having applications ranging from Computer Vision to Computational Chemistry. Whenever two complex patterns are encoded by two sets of points identifying their key features, their comparison can be seen as a point pattern matching problem. This work proposes a single approach to both exact and inexact point set matching in Euclidean Spaces of arbitrary dimension. In the case of exact matching, the approach is guaranteed to find an optimal solution. For inexact matching (when noise is involved), experimental results confirm the validity of the approach. We start by regarding point pattern matching as a weighted graph matching problem. We then formulate the weighted graph matching problem as one of Bayesian inference in a probabilistic graphical model. By exploiting the existence of fundamental constraints in patterns embedded in Euclidean Spaces, we prove that for exact point set matching a simple graphical model is equivalent to the full model. It is possible to show that exact probabilistic inference in this simple model has polynomial time complexity with respect to the number of elements in the patterns to be matched. This gives rise to a technique that for exact matching provably finds a global optimum in polynomial time for any dimensionality of the underlying Euclidean Space. Computational experiments comparing this technique with well-known probabilistic relaxation labeling show significant performance improvement for inexact matching. The proposed approach is significantly more robust under augmentation of the sizes of the involved patterns. In the absence of noise, the results are always perfect.
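The weighted-graph formulation can be illustrated with a tiny brute-force scorer: each pattern is treated as a complete graph whose edge weights are pairwise distances, a candidate correspondence is scored by how much those weights disagree, and an exact match corresponds to a zero-cost permutation. This is only an illustration of the problem statement (exponential in the pattern size), not the paper's polynomial-time graphical-model inference.

    import numpy as np
    from itertools import permutations
    from scipy.spatial.distance import cdist

    def match_cost(A, B, perm):
        """Sum of |d_A(i, j) - d_B(perm(i), perm(j))| over all point pairs."""
        DA, DB = cdist(A, A), cdist(B, B)
        idx = np.array(perm)
        return np.abs(DA - DB[np.ix_(idx, idx)]).sum()

    def brute_force_match(A, B):
        """Best correspondence between equal-sized point sets (tiny sets only)."""
        best = min(permutations(range(len(B))), key=lambda p: match_cost(A, B, p))
        return best, match_cost(A, B, best)

    # Exact case: B is a rotated, permuted copy of A, so the optimal cost is ~0.
    A = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0], [2.0, 1.0]])
    theta = 0.7
    R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
    B = (A @ R.T)[[2, 0, 3, 1]]
    print(brute_force_match(A, B))   # permutation mapping A's points onto B's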
Abstract:
This paper presents a kernel density correlation based nonrigid point set matching method and shows its application in statistical model based 2D/3D reconstruction of a scaled, patient-specific model from an uncalibrated x-ray radiograph. In this method, both the reference point set and the floating point set are first represented using kernel density estimates. A correlation measure between these two kernel density estimates is then optimized to find a displacement field such that the floating point set is moved to the reference point set. Regularizations based on the overall deformation energy and the motion smoothness energy are used to constrain the displacement field for a robust point set matching. Incorporating this nonrigid point set matching method into a statistical model based 2D/3D reconstruction framework, we can reconstruct a scaled, patient-specific model from noisy edge points that are extracted directly from the x-ray radiograph by an edge detector. Our experiment conducted on datasets of two patients and six cadavers demonstrates a mean reconstruction error of 1.9 mm.
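A reduced form of the kernel density correlation idea: represent each point set by a Gaussian kernel density estimate and maximise the correlation between the two estimates, here only over a rigid translation rather than the regularised nonrigid displacement field used in the paper; the bandwidth, optimiser, and names are illustrative.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.spatial.distance import cdist

    def kernel_correlation(ref, moving, sigma):
        """Correlation of two Gaussian kernel density estimates, up to constants:
        sum over point pairs of exp(-||r - m||^2 / (4 sigma^2))."""
        d2 = cdist(ref, moving, "sqeuclidean")
        return np.exp(-d2 / (4.0 * sigma ** 2)).sum()

    def register_translation(ref, moving, sigma=1.0):
        """Translation that maximises the kernel correlation (local optimisation)."""
        cost = lambda t: -kernel_correlation(ref, moving + t, sigma)
        res = minimize(cost, x0=np.zeros(ref.shape[1]), method="Nelder-Mead")
        return res.x

    # Toy example: recover a known 2-D shift of a noisy copy of the reference set.
    rng = np.random.default_rng(1)
    ref = rng.random((80, 2)) * 10
    moving = ref + np.array([1.5, -0.8]) + 0.05 * rng.standard_normal(ref.shape)
    print(register_translation(ref, moving, sigma=1.0))   # roughly [-1.5, 0.8]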
Abstract:
Wide-angle images exhibit significant distortion for which existing scale-space detectors such as the scale-invariant feature transform (SIFT) are inappropriate. The required scale-space images for feature detection are correctly obtained through the convolution of the image, mapped to the sphere, with the spherical Gaussian. A new visual key-point detector, based on this principle, is developed and several computational approaches to the convolution are investigated in both the spatial and frequency domain. In particular, a close approximation is developed that has comparable computation time to conventional SIFT but with improved matching performance. Results are presented for monocular wide-angle outdoor image sequences obtained using fisheye and equiangular catadioptric cameras. We evaluate the overall matching performance (recall versus 1-precision) of these methods compared to conventional SIFT. We also demonstrate the use of the technique for variable frame-rate visual odometry and its application to place recognition.
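For reference, the recall versus 1-precision curve used in the evaluation can be computed as sketched below from the descriptor distance of each putative match and a ground-truth correct/incorrect label; this is the generic definition with illustrative names, not code from the paper.

    import numpy as np

    def recall_vs_one_minus_precision(distances, is_correct):
        """Sweep a descriptor-distance threshold over the putative matches.

        distances:  (N,) descriptor distances of putative matches (smaller = better).
        is_correct: (N,) boolean ground-truth labels for those matches.
        Returns arrays (recall, 1 - precision), one entry per threshold.
        """
        order = np.argsort(distances)
        correct = np.asarray(is_correct)[order]
        tp = np.cumsum(correct)                    # true positives below each threshold
        fp = np.cumsum(~correct)                   # false positives below each threshold
        recall = tp / correct.sum()
        one_minus_precision = fp / (tp + fp)
        return recall, one_minus_precision

    # Toy usage with random scores: correct matches tend to have smaller distances.
    rng = np.random.default_rng(2)
    labels = rng.random(200) < 0.4
    dists = np.where(labels, rng.normal(0.3, 0.1, 200), rng.normal(0.6, 0.15, 200))
    recall, omp = recall_vs_one_minus_precision(dists, labels)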
Abstract:
Camera calibration information is required in order for multiple-camera networks to deliver more than the sum of many single-camera systems. Methods exist for manually calibrating cameras with high accuracy. Manually calibrating networks with many cameras is, however, time-consuming, expensive and impractical for networks that undergo frequent change. For this reason, automatic calibration techniques have been vigorously researched in recent years. Fully automatic calibration methods depend on the ability to automatically find point correspondences between overlapping views. In typical camera networks, cameras are placed far apart to maximise coverage. This is referred to as a wide baseline scenario. Finding sufficient correspondences for camera calibration in wide baseline scenarios presents a significant challenge. This thesis focuses on developing more effective and efficient techniques for finding correspondences in uncalibrated, wide baseline, multiple-camera scenarios. The project consists of two major areas of work. The first is the development of more effective and efficient view covariant local feature extractors. The second area involves finding methods to extract scene information using the information contained in a limited set of matched affine features. Several novel affine adaptation techniques for salient features have been developed. A method is presented for efficiently computing the discrete scale space primal sketch of local image features. A scale selection method was implemented that makes use of the primal sketch. The primal sketch-based scale selection method has several advantages over existing methods: it allows greater freedom in how the scale space is sampled, enables more accurate scale selection, is more effective at combining different functions for spatial position and scale selection, and leads to greater computational efficiency. Existing affine adaptation methods make use of the second moment matrix to estimate the local affine shape of local image features. In this thesis, it is shown that the Hessian matrix can be used in a similar way to estimate local feature shape. The Hessian matrix is effective for estimating the shape of blob-like structures, but is less effective for corner structures. It is simpler to compute than the second moment matrix, leading to a significant reduction in computational cost. A wide baseline dense correspondence extraction system, called WiDense, is presented in this thesis. It allows the extraction of large numbers of additional accurate correspondences, given only a few initial putative correspondences. It consists of the following algorithms: an affine region alignment algorithm that ensures accurate alignment between matched features; a method for extracting more matches in the vicinity of a matched pair of affine features, using the alignment information contained in the match; and an algorithm for extracting large numbers of highly accurate point correspondences from an aligned pair of feature regions. Experiments show that the correspondences generated by the WiDense system improve the success rate of computing the epipolar geometry of very widely separated views. This new method is successful in many cases where the features produced by the best wide baseline matching algorithms are insufficient for computing the scene geometry.
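The two local-shape estimators compared above can be written down compactly: the second moment matrix (structure tensor) averages outer products of first derivatives over a window, whereas the Hessian uses second derivatives at a point; the affine shape is then read off the eigen-decomposition. A minimal sketch with illustrative smoothing scales (not the thesis implementation):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def second_moment_matrix(img, sigma_d=1.0, sigma_i=2.0):
        """Structure tensor per pixel: windowed outer product of first derivatives."""
        Ix = gaussian_filter(img, sigma_d, order=(0, 1))
        Iy = gaussian_filter(img, sigma_d, order=(1, 0))
        Mxx = gaussian_filter(Ix * Ix, sigma_i)
        Mxy = gaussian_filter(Ix * Iy, sigma_i)
        Myy = gaussian_filter(Iy * Iy, sigma_i)
        return Mxx, Mxy, Myy

    def hessian_matrix(img, sigma=2.0):
        """Hessian per pixel: second derivatives of the smoothed image."""
        Hxx = gaussian_filter(img, sigma, order=(0, 2))
        Hxy = gaussian_filter(img, sigma, order=(1, 1))
        Hyy = gaussian_filter(img, sigma, order=(2, 0))
        return Hxx, Hxy, Hyy

    def affine_shape(a, b, c):
        """Elongation and major axis of the ellipse described by [[a, b], [b, c]]."""
        w, v = np.linalg.eigh(np.array([[a, b], [b, c]]))
        return np.sqrt(abs(w[1]) / max(abs(w[0]), 1e-12)), v[:, 1]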
Abstract:
Stereo vision is a method of depth perception, in which depth information is inferred from two (or more) images of a scene, taken from different perspectives. Applications of stereo vision include aerial photogrammetry, autonomous vehicle guidance, robotics, industrial automation and stereomicroscopy. A key issue in stereo vision is that of image matching, or identifying corresponding points in a stereo pair. The difference in the positions of corresponding points in image coordinates is termed the parallax or disparity. When the orientation of the two cameras is known, corresponding points may be projected back to find the location of the original object point in world coordinates. Matching techniques are typically categorised according to the nature of the matching primitives they use and the matching strategy they employ. This report provides a detailed taxonomy of image matching techniques, including area based, transform based, feature based, phase based, hybrid, relaxation based, dynamic programming and object space methods. A number of area based matching metrics as well as the rank and census transforms were implemented, in order to investigate their suitability for a real-time stereo sensor for mining automation applications. The requirements of this sensor were speed, robustness, and the ability to produce a dense depth map. The Sum of Absolute Differences matching metric was the least computationally expensive; however, this metric was the most sensitive to radiometric distortion. Metrics such as the Zero Mean Sum of Absolute Differences and Normalised Cross Correlation were the most robust to this type of distortion but introduced additional computational complexity. The rank and census transforms were found to be robust to radiometric distortion, in addition to having low computational complexity. They are therefore prime candidates for a matching algorithm for a stereo sensor for real-time mining applications. A number of issues came to light during this investigation which may merit further work. These include devising a means to evaluate and compare disparity results of different matching algorithms, and finding a method of assigning a level of confidence to a match. Another issue of interest is the possibility of statistically combining the results of different matching algorithms, in order to improve robustness.
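The matching costs discussed in the report translate directly into code: the census transform encodes each pixel by the pattern of neighbours darker than it, so the matching cost becomes a Hamming distance, while SAD simply sums absolute grey-level differences over a window. A minimal sketch (window size and names are illustrative):

    import numpy as np

    def census_transform(img, window=5):
        """Census transform: bit i is set when the i-th neighbour in the window is
        darker than the centre pixel (borders wrap around; acceptable for a sketch)."""
        r = window // 2
        h, w = img.shape
        out = np.zeros((h, w), dtype=np.uint64)
        bit = 0
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                if dy == 0 and dx == 0:
                    continue
                shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
                out |= (shifted < img).astype(np.uint64) << np.uint64(bit)
                bit += 1
        return out

    def hamming_cost(census_left, census_right):
        """Matching cost between two census images (element-wise Hamming distance)."""
        diff = np.bitwise_xor(census_left, census_right)
        return np.array([bin(v).count("1") for v in diff.ravel()]).reshape(diff.shape)

    def sad_cost(left_win, right_win):
        """Sum of absolute differences over two equally sized windows."""
        return np.abs(left_win.astype(float) - right_win.astype(float)).sum()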
Abstract:
This paper is about localising across extreme lighting and weather conditions. We depart from the traditional point-feature-based approach, as matching under dramatic appearance changes is brittle and hard. Point feature detectors are fixed and rigid procedures which pass over an image examining small, low-level structure such as corners or blobs, and they apply the same criteria to all images of all places. This paper takes a contrary view and asks what is possible if instead we learn a bespoke detector for every place. Our localisation task then turns into curating a large bank of spatially indexed detectors, and we show that this yields vastly superior performance in terms of robustness, in exchange for a reduced but tolerable metric precision. We present an unsupervised system that produces broad-region detectors for distinctive visual elements, called scene signatures, which can be associated across almost all appearance changes. We show, using 21 km of data collected over a period of 3 months, that our system is capable of producing metric localisation estimates from night-to-day or summer-to-winter conditions.
Abstract:
In this paper, we present a new feature-based approach for mosaicing of camera-captured document images. A novel block-based scheme is employed to ensure that corners can be reliably detected over a wide range of images. A 2-D discrete cosine transform is computed for image blocks defined around each of the detected corners, and a small subset of the coefficients is used as a feature vector. A 2-pass feature matching is performed to establish point correspondences from which the homography relating the input images can be computed. The algorithm is tested on a number of complex document images casually taken with a hand-held camera, yielding convincing results.
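A rough skeleton of the described pipeline is sketched below using common OpenCV routines, with Shi-Tomasi corners and a ratio test standing in for the paper's block-based corner detection and 2-pass matching; the block size, number of DCT coefficients, and thresholds are illustrative assumptions, not the paper's values.

    import cv2
    import numpy as np

    def dct_descriptors(gray, corners, block=16, n_coeffs=15):
        """Low-frequency 2-D DCT coefficients of a block centred on each corner."""
        h, w = gray.shape
        r = block // 2
        descs, kept = [], []
        for x, y in corners:
            x, y = int(round(x)), int(round(y))
            if r <= x < w - r and r <= y < h - r:
                patch = np.float32(gray[y - r:y + r, x - r:x + r])
                coeffs = cv2.dct(patch)
                descs.append(coeffs[:4, :4].ravel()[:n_coeffs])   # low-frequency subset
                kept.append((x, y))
        return np.float32(descs), np.float32(kept)

    def mosaic_homography(img1, img2):
        g1, g2 = (cv2.cvtColor(i, cv2.COLOR_BGR2GRAY) for i in (img1, img2))
        c1 = cv2.goodFeaturesToTrack(g1, 500, 0.01, 10).reshape(-1, 2)
        c2 = cv2.goodFeaturesToTrack(g2, 500, 0.01, 10).reshape(-1, 2)
        d1, p1 = dct_descriptors(g1, c1)
        d2, p2 = dct_descriptors(g2, c2)
        matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(d1, d2, k=2)
        good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # ratio test
        src = np.float32([p1[m.queryIdx] for m in good])
        dst = np.float32([p2[m.trainIdx] for m in good])
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
        return H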