951 results for Morphing Alteration Detection Image Warping
Abstract:
OBJECTIVES: To assess magnetic resonance (MR)-colonography (MRC) for detection of colorectal lesions using two different T1w three-dimensional (3D)-gradient-recalled echo (GRE)-sequences and integrated parallel data acquisition (iPAT) at a 3.0 Tesla MR-unit. MATERIALS AND METHODS: In this prospective study, 34 symptomatic patients underwent dark lumen MRC at a 3.0 Tesla unit before conventional colonoscopy (CC). After colon distension with tap water, 2 high-resolution T1w 3D-GRE sequences [3-dimensional fast low angle shot (3D-FLASH), iPAT factor 2, and 3D-volumetric interpolated breathhold examination (VIBE), iPAT 3] were acquired without and after bolus injection of gadolinium. Prospective evaluation of MRC was performed. Image quality of the different sequences was assessed qualitatively and quantitatively. The findings of the same-day CC served as the standard of reference. RESULTS: MRC correctly identified all polyps >5 mm in size (16 of 16) and all carcinomas (4 of 4). Fifty percent of the small polyps ≤5 mm (4 of 8) were visualized by MRC. Diagnostic quality was excellent in 94% (384 of 408 colonic segments) for the 3D-FLASH and in 92% (376 of 408) for the VIBE. The 3D-FLASH sequence showed a 3-fold increase in signal-to-noise ratio (8 ± 3.3 standard deviation (SD) in lesions without contrast enhancement (CE); 24.3 ± 7.8 SD after CE). For the 3D-VIBE sequence, the signal-to-noise ratio doubled in the detected lesions (147 ± 54 SD without and 292 ± 168 SD after CE). Although image quality was ranked lower for the VIBE, the image quality scores of the two sequences showed no statistically significant difference (chi > 0.6). CONCLUSIONS: MRC using 3D-GRE sequences and iPAT is feasible at 3.0 T systems. The high-resolution 3D-FLASH was slightly preferred over the 3D-VIBE because of better image quality, although the two sequences showed no statistically significant difference.
Abstract:
We present an algorithm for estimating dense image correspondences. Our versatile approach lends itself to various tasks typical for video post-processing, including image morphing, optical flow estimation, stereo rectification, disparity/depth reconstruction, and baseline adjustment. We incorporate recent advances in feature matching, energy minimization, stereo vision, and data clustering into our approach. At the core of our correspondence estimation we use Efficient Belief Propagation for energy minimization. While state-of-the-art algorithms only work on thumbnail-sized images, our novel feature downsampling scheme in combination with a simple yet efficient data term compression can cope with high-resolution data. The incorporation of SIFT (Scale-Invariant Feature Transform) features into the data term computation further resolves matching ambiguities, making long-range correspondence estimation possible. We detect occluded areas by evaluating the correspondence symmetry and apply Geodesic matting to automatically determine plausible values in these regions.
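The occlusion handling described above (evaluating correspondence symmetry) is commonly realized as a forward-backward consistency check. The snippet below is a minimal illustration of that idea, not the paper's implementation; it assumes dense forward and backward flow fields are already available and uses an arbitrary one-pixel tolerance.

    import numpy as np

    def occlusion_mask(flow_fw, flow_bw, tol=1.0):
        """Flag pixels whose forward flow is not undone by the backward flow.

        flow_fw, flow_bw: (H, W, 2) arrays of (dx, dy) displacements.
        tol: symmetry tolerance in pixels (assumed value).
        """
        h, w = flow_fw.shape[:2]
        ys, xs = np.mgrid[0:h, 0:w]
        # Follow the forward flow to the matched position in the second image.
        xt = np.clip(xs + flow_fw[..., 0], 0, w - 1)
        yt = np.clip(ys + flow_fw[..., 1], 0, h - 1)
        # Sample the backward flow there (nearest neighbor for simplicity).
        bw = flow_bw[yt.round().astype(int), xt.round().astype(int)]
        # For visible pixels the two displacements should roughly cancel.
        err = np.linalg.norm(flow_fw + bw, axis=-1)
        return err > tol  # True where the correspondence is likely occluded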
Abstract:
Exposure Fusion and other HDR techniques generate well-exposed images from a bracketed image sequence while reproducing a large dynamic range that far exceeds the dynamic range of a single exposure. Common to all these techniques is the problem that the smallest movements in the captured images generate artefacts (ghosting) that dramatically affect the quality of the final images. This limits the use of HDR and Exposure Fusion techniques, because common scenes of interest are usually dynamic. We present a method that adapts Exposure Fusion, as well as standard HDR techniques, to allow for dynamic scenes without introducing artefacts. Our method detects clusters of moving pixels within a bracketed exposure sequence with simple binary operations. We show that the proposed technique is able to deal with a large amount of movement in the scene and different movement configurations. The result is a ghost-free and highly detailed exposure-fused image at a low computational cost.
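As a rough illustration of detecting clusters of moving pixels with simple binary operations, the sketch below thresholds deviations from a per-pixel median of exposure-normalized frames and cleans the result with binary morphology; the threshold, morphology iterations, and minimum cluster size are assumed values, and this is not the authors' exact pipeline.

    import numpy as np
    from scipy import ndimage

    def motion_clusters(images, thresh=0.1, min_size=50):
        """Binary mask of moving-pixel clusters in a bracketed exposure sequence.

        images: list of float grayscale exposures scaled to [0, 1], roughly
                exposure-normalized beforehand (assumed preprocessing step).
        thresh, min_size: assumed values, not taken from the paper.
        """
        ref = np.median(np.stack(images), axis=0)          # static-scene estimate
        mask = np.zeros(ref.shape, dtype=bool)
        for img in images:
            mask |= np.abs(img - ref) > thresh             # pixels deviating from the estimate
        mask = ndimage.binary_closing(mask, iterations=2)  # merge nearby detections
        labels, n = ndimage.label(mask)                    # group pixels into clusters
        sizes = np.asarray(ndimage.sum(mask, labels, range(1, n + 1)))
        keep_ids = 1 + np.flatnonzero(sizes >= min_size)   # discard tiny clusters (noise)
        return np.isin(labels, keep_ids)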
Abstract:
Skin segmentation is a challenging task due to several influences such as unknown lighting conditions, skin-colored background, and camera limitations. Many skin segmentation approaches have been proposed in the past, including adaptive (in the sense of updating the skin color online) and non-adaptive approaches. In this paper, we compare three skin segmentation approaches that are promising for hand tracking, which is our main motivation for this work. Hand tracking is widely applicable in VR/AR, e.g., for navigation and object manipulation. The first skin segmentation approach is a well-known non-adaptive approach based on a simple, pre-computed skin color distribution. The second and third methods adaptively estimate the skin color in each frame utilizing clustering algorithms. The second approach uses hierarchical clustering for a simultaneous image and color space segmentation, while the third is a pure color space clustering with a more sophisticated clustering approach. For evaluation, we compared the segmentation results of the approaches against a ground truth dataset. To obtain the ground truth, we labeled about 500 images captured under various conditions.
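The first (non-adaptive) approach, a classifier built on a pre-computed skin color distribution, can be sketched as a simple histogram lookup. The histogram, its hue/saturation binning, and the probability threshold below are illustrative assumptions, not the paper's actual model.

    import numpy as np
    import cv2

    def skin_mask(bgr_image, skin_hist, prob_thresh=0.4):
        """Non-adaptive skin segmentation from a pre-computed color histogram.

        skin_hist: 2D histogram over (H, S) in HSV space, normalized to [0, 1],
                   e.g. P(skin | color) estimated offline from labeled data.
        prob_thresh: assumed decision threshold.
        """
        hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
        # Map OpenCV hue (0..179) and saturation (0..255) to histogram bins.
        h = hsv[..., 0].astype(int) * skin_hist.shape[0] // 180
        s = hsv[..., 1].astype(int) * skin_hist.shape[1] // 256
        prob = skin_hist[h, s]          # per-pixel skin probability lookup
        return prob > prob_thresh       # binary skin mask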
Abstract:
We propose a new method for fully automatic landmark detection and shape segmentation in X-ray images. Our algorithm works by estimating the displacements from image patches to the (unknown) landmark positions and then integrating them via voting. The fundamental contribution is that we jointly estimate the displacements from all patches to multiple landmarks, considering not only the training data but also geometric constraints on the test image. These constraints constitute a convex objective function that can be solved efficiently. Validated on three challenging datasets, our method achieves high accuracy in landmark detection and, combined with a statistical shape model, gives better performance in shape segmentation than state-of-the-art methods.
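The patch-to-landmark voting can be illustrated by accumulating predicted displacements into a vote map and taking its maximum; the per-patch displacement regressor and the joint, constraint-based estimation of the paper are abstracted away here, so this is only a schematic single-landmark version.

    import numpy as np

    def vote_landmark(patch_centers, predicted_disps, image_shape):
        """Accumulate displacement votes from image patches into a vote map.

        patch_centers:   (N, 2) array of (y, x) patch positions.
        predicted_disps: (N, 2) displacements to the landmark, produced by
                         some regressor (abstract placeholder here).
        """
        votes = np.zeros(image_shape, dtype=np.float32)
        targets = np.rint(patch_centers + predicted_disps).astype(int)
        for y, x in targets:
            if 0 <= y < image_shape[0] and 0 <= x < image_shape[1]:
                votes[y, x] += 1.0                 # each patch casts one vote
        # Landmark estimate: position with the most votes.
        return np.unravel_index(votes.argmax(), votes.shape)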
Abstract:
In attempts to elucidate the underlying mechanisms of spinal injuries and spinal deformities, several experimental and numerical studies have been conducted to understand the biomechanical behavior of the spine. However, numerical biomechanical studies suffer from uncertainties associated with hard- and soft-tissue anatomies. Currently, these parameters are identified manually on each mesh model prior to simulation. The determination of soft connective tissues on finite element meshes can be a tedious procedure, which limits the number of models used in numerical studies to a few instances. To address these limitations, an image-based method for automatic morphing of soft connective tissues has been proposed. Results showed that the proposed method is capable of accurately determining the spatial locations of predetermined bony landmarks. The present method can be used to automatically generate patient-specific models, which may be helpful in designing studies involving a large number of instances and in understanding the mechanical behavior of biomechanical structures across a given population.
Abstract:
Ophthalmologists typically acquire different image modalities to diagnose eye pathologies. These include, e.g., Fundus photography, Optical Coherence Tomography (OCT), Computed Tomography (CT), and Magnetic Resonance Imaging (MRI). These images are often complementary and express the same pathologies in different ways; some pathologies are only visible in a particular modality. Thus, it is beneficial for the ophthalmologist to have these modalities fused into a single patient-specific model. The goal of the presented article is the fusion of Fundus photography with segmented MRI volumes. This adds information to the MRI that was not visible before, such as vessels and the macula. This article’s contributions include automatic detection of the optic disc, the fovea, and the optic axis, and an automatic segmentation of the vitreous humor of the eye.
Abstract:
In combined clinical optoacoustic (OA) and ultrasound (US) imaging, epi-mode irradiation and detection integrated into a single probe offer flexible imaging of the human body. The imaging depth in epi-illumination is, however, strongly affected by clutter. As shown in previous phantom experiments, the location of irradiation plays an important role in clutter generation. We investigated the influence of the irradiation geometry on the local image contrast of clinical images by varying the separation distance between the irradiated area and the acoustic imaging plane of a linear ultrasound transducer in an automated scanning setup. The results for different volunteers show that the image contrast can be enhanced on average by 25%, and locally by more than a factor of two, when the irradiated area is slightly separated from the probe. Our findings have an important impact on the design of future optoacoustic probes for clinical application.
Abstract:
PURPOSE To prospectively assess the diagnostic performance of diffusion-weighted (DW) magnetic resonance (MR) imaging in the detection of pelvic lymph node metastases in patients with prostate and/or bladder cancer staged as N0 with preoperative cross-sectional imaging. MATERIALS AND METHODS This study was approved by an independent ethics committee. Written informed consent was obtained from all patients. Patients with no enlarged lymph nodes on preoperative cross-sectional images who were scheduled for radical resection of the primary tumor and extended pelvic lymph node dissection were enrolled. All patients were examined with a 3-T MR unit, and examinations included conventional and DW MR imaging of the entire pelvis. Image analysis was performed by three independent readers blinded to any clinical information. Metastases were diagnosed on the basis of high signal intensity on high-b-value DW MR images and morphologic features (shape, border). Histopathologic examination served as the standard of reference. Sensitivity and specificity were calculated, and bias-corrected 95% confidence intervals (CIs) were obtained with the bootstrap method. The Fleiss and Cohen κ and median tests were applied for statistical analyses. RESULTS A total of 4846 lymph nodes were resected in 120 patients. Eighty-eight lymph node metastases were found in 33 of 120 patients (27.5%). The short-axis diameter of these metastases was less than or equal to 3 mm in 68, more than 3 mm to 5 mm in 13, more than 5 mm to 8 mm in five, and more than 8 mm in two. On a per-patient level, the three readers correctly detected metastases in 26 (79%; 95% CI: 64%, 91%), 21 (64%; 95% CI: 45%, 79%), and 25 (76%; 95% CI: 60%, 90%) of the 33 patients with metastases, with respective specificities of 85% (95% CI: 78%, 92%), 79% (95% CI: 70%, 88%), and 84% (95% CI: 76%, 92%). Analyzed according to hemipelvis, lymph node metastases were detected with histopathologic examination in 44 of 240 pelvic sides (18%); the three readers correctly detected these on DW MR images in 26 (59%; 95% CI: 45%, 73%), 19 (43%; 95% CI: 27%, 57%), and 28 (64%; 95% CI: 47%, 78%) of the 44 cases. CONCLUSION DW MR imaging enables noninvasive detection of small lymph node metastases in normal-sized nodes in a substantial percentage of patients with prostate and bladder cancer diagnosed as N0 with conventional cross-sectional imaging techniques.
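For the reported confidence intervals, a plain percentile bootstrap over per-patient detection outcomes would look roughly like the sketch below; the study used a bias-corrected variant, which is omitted here for brevity, and the function name and defaults are placeholders.

    import numpy as np

    def bootstrap_ci(successes, n, n_boot=10000, alpha=0.05, seed=0):
        """Percentile bootstrap CI for a proportion (simplified illustration)."""
        rng = np.random.default_rng(seed)
        data = np.r_[np.ones(successes), np.zeros(n - successes)]
        # Resample patients with replacement and recompute the proportion.
        boots = rng.choice(data, size=(n_boot, n), replace=True).mean(axis=1)
        return np.percentile(boots, [100 * alpha / 2, 100 * (1 - alpha / 2)])

    # e.g. bootstrap_ci(26, 33) for reader 1's per-patient sensitivity of 79%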
Abstract:
OBJECTIVE To evaluate the treatment response of hepatocellular carcinoma (HCC) after transarterial chemoembolization (TACE) with a new real-time image fusion technique of contrast-enhanced ultrasound (CEUS) with multi-slice detection computed tomography (CT), in comparison to conventional post-interventional follow-up. MATERIAL AND METHODS 40 patients with HCC (26 male, ages 46-81 years) were evaluated 24 hours after TACE using CEUS with ultrasound volume navigation and image fusion with CT, compared to non-enhanced CT and follow-up contrast-enhanced CT after 6-8 weeks. Reduction of tumor vascularization to less than 25% was regarded as "successful" treatment, whereas reduction to levels >25% was considered a "partial" treatment response. Homogeneous lipiodol retention was regarded as successful treatment in non-enhanced CT. RESULTS Post-interventional image fusion of CEUS with CT was feasible in all 40 patients. In 24 patients (24/40), post-interventional image fusion with CEUS revealed residual tumor vascularity, which was confirmed by contrast-enhanced CT 6-8 weeks later in 24/24 patients. In 16 patients (16/40), post-interventional image fusion with CEUS demonstrated successful treatment, but follow-up CT detected residual viable tumor in 6 of these 16 patients. Non-enhanced CT did not identify any case of treatment failure. Image fusion with CEUS assessed treatment efficacy with a specificity of 100%, a sensitivity of 80%, and a positive predictive value of 1 (negative predictive value 0.63). CONCLUSIONS Image fusion of CEUS with CT allows a reliable, highly specific post-interventional evaluation of embolization response with good sensitivity and without any further radiation exposure. It can detect residual viable tumor at an early stage, allowing close patient monitoring or re-therapy.
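The reported diagnostic measures follow directly from the stated counts (24 confirmed positives, 6 missed residual tumors, 10 correctly negative patients, no false positives), as this small check illustrates.

    tp, fn, tn, fp = 24, 6, 10, 0   # counts reconstructed from the abstract

    sensitivity = tp / (tp + fn)    # 24 / 30 = 0.80
    specificity = tn / (tn + fp)    # 10 / 10 = 1.00
    ppv = tp / (tp + fp)            # 24 / 24 = 1.00
    npv = tn / (tn + fn)            # 10 / 16 = 0.625 (reported as 0.63)

    print(sensitivity, specificity, ppv, npv)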
Abstract:
Cephalometric analysis is an essential clinical and research tool in orthodontics for analysis and treatment planning. This paper presents the evaluation of the methods submitted to the Automatic Cephalometric X-Ray Landmark Detection Challenge, held at the IEEE International Symposium on Biomedical Imaging 2014 with an on-site competition. The challenge was set up to explore and compare automatic landmark detection methods applied to cephalometric X-ray images. Methods were evaluated on a common database of cephalograms of 300 patients aged six to 60 years, collected from the Dental Department, Tri-Service General Hospital, Taiwan, with anatomical landmarks manually marked by two experienced medical doctors as the ground truth data. Quantitative evaluation was performed to compare the results of a representative selection of the methods submitted to the challenge. Experimental results show that three methods are able to achieve detection rates greater than 80% using the 4 mm precision range, but only one method achieves a detection rate greater than 70% using the 2 mm precision range, which is the acceptable precision range in clinical practice. The study provides insights into the performance of different landmark detection approaches under real-world conditions and highlights the achievements and limitations of current image analysis techniques.
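The detection-rate criterion used in the challenge (the fraction of landmarks falling within a given radius of the manual annotation) could be computed as sketched below; the pixel spacing is an assumed value and the coordinate arrays are placeholders.

    import numpy as np

    def detection_rate(pred, gt, radius_mm, pixel_spacing_mm=0.1):
        """Fraction of predicted landmarks within `radius_mm` of the ground truth.

        pred, gt: (N, 2) landmark coordinates in pixels.
        pixel_spacing_mm: assumed pixel size of the cephalograms.
        """
        dist_mm = np.linalg.norm(pred - gt, axis=1) * pixel_spacing_mm
        return np.mean(dist_mm <= radius_mm)

    # e.g. detection_rate(pred, gt, 2.0) for the clinically acceptable 2 mm range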
Abstract:
The use of infrared thermography for the identification of lameness in cattle has increased in recent years, largely because of its non-invasive nature, ease of automation, and continued cost reductions. Thermography can be used to identify and characterize thermal abnormalities in animals by detecting an increase or decrease in the surface temperature of their skin. The variation in superficial thermal patterns resulting from changes in blood flow in particular can be used to detect inflammation or injury associated with conditions such as foot lesions. Thermography has been used not only as a diagnostic tool, but also to evaluate routine farm management. Since 2000, 14 peer-reviewed papers discussing the assessment of thermography for identifying and managing lameness in cattle have been published. Thermography performance varied widely across these studies. However, thermography was demonstrated to have utility for detecting contralateral temperature differences and maximum foot temperature on areas of interest. These publications also make clear that a controlled environment is an important consideration before image scanning.
Abstract:
Any image-processing object detection algorithm somehow tries to integrate the object light (Recognition Step) and applies statistical criteria to distinguish objects of interest from other objects or from pure background (Decision Step). There are various ways these two basic steps can be realized, as can be seen in the different detection methods proposed in the literature. An ideal detection algorithm should provide high recognition sensitivity with high decision accuracy and require a reasonable computation effort. In reality, a gain in sensitivity is usually only possible with a loss in decision accuracy and with a higher computational effort. So, automatic detection of faint streaks is still a challenge. This paper presents a detection algorithm using spatial filters that simulate the geometrical form of possible streaks on a CCD image. This is realized by image convolution. The goal of this method is to generate a more or less perfect match between a streak and a filter by varying the length and orientation of the filters. The convolution answers are accepted or rejected according to an overall threshold given by the background statistics. This approach yields as a first result a huge number of accepted answers due to filters partially covering streaks or remaining stars. To avoid this, a set of additional acceptance criteria has been included in the detection method. All criteria parameters are justified by background and streak statistics, and they affect the detection sensitivity only marginally. Tests on images containing simulated streaks and on real images containing satellite streaks show very promising sensitivity, reliability, and running speed for this detection method. Since all method parameters are based on statistics, both the true-alarm and the false-alarm probability are well controllable. Moreover, the proposed method does not pose any extraordinary demands on the computer hardware or the image acquisition process.
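A minimal sketch of this matched-filter idea (convolving the image with oriented line-shaped kernels and thresholding the responses against background statistics) is given below; the kernel lengths, orientation sampling, and threshold factor are assumed values, and the paper's additional acceptance criteria are not included.

    import numpy as np
    from scipy import ndimage

    def oriented_line_kernel(length, angle_deg):
        """Binary kernel approximating a streak of a given length and orientation."""
        size = length + 2
        k = np.zeros((size, size), dtype=np.float32)
        c = size // 2
        t = np.linspace(-length / 2, length / 2, length)
        ys = np.rint(c + t * np.sin(np.deg2rad(angle_deg))).astype(int)
        xs = np.rint(c + t * np.cos(np.deg2rad(angle_deg))).astype(int)
        k[ys, xs] = 1.0
        return k / k.sum()                      # normalize the filter response

    def detect_streaks(image, lengths=(15, 31), angles=range(0, 180, 15), k_sigma=5.0):
        """Candidate streak mask: filter responses above background mean + k*sigma."""
        bg_mean, bg_std = image.mean(), image.std()   # crude background statistics
        mask = np.zeros(image.shape, dtype=bool)
        for length in lengths:
            for angle in angles:
                resp = ndimage.convolve(image, oriented_line_kernel(length, angle))
                mask |= resp > bg_mean + k_sigma * bg_std
        return mask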