977 results for image noise modeling
Abstract:
AIM To compare the computed tomography (CT) dose and image quality of filtered back projection against iterative reconstruction and CT with a minimal-electronic-noise detector. METHODS A lung phantom (Chest Phantom N1 by Kyoto Kagaku) was scanned with 3 different CT scanners: the Somatom Sensation, the Definition Flash and the Definition Edge (all from Siemens, Erlangen, Germany). The scan parameters were identical to the Siemens preset for THORAX ROUTINE (scan length 35 cm and FOV 33 cm). Nine different exposure levels were examined (reference mAs/peak voltage): 100/120, 100/100, 100/80, 50/120, 50/100, 50/80, 25/120, 25/100 and 25 mAs/80 kVp. Images from the Somatom Sensation were reconstructed using classic filtered back projection. Iterative reconstruction (SAFIRE, level 3) was performed for the two other scanners. A Stellar detector was used with the Somatom Definition Edge. The CT doses were represented by the dose length products (DLPs, mGy·cm) provided by the scanners. Signal, contrast, noise and subjective image quality were recorded by two radiologists with 10 and 3 years of experience in chest CT. To determine the average dose reduction between two scanners, the integral of the dose difference was calculated from the lowest to the highest noise level. RESULTS When using iterative reconstruction (IR) instead of filtered back projection (FBP), the average dose reduction was 30%, 52% and 80% for bone, soft tissue and air, respectively, at the same image quality (P < 0.0001). The recently introduced Stellar detector (Sd) lowered the radiation dose by an additional 27%, 54% and 70% for bone, soft tissue and air, respectively (P < 0.0001). The benefit of dose reduction was larger at lower dose levels. At the same radiation dose, an average of 34% (22%-37%) and 25% (13%-46%) more contrast-to-noise was achieved by changing from FBP to IR and from IR to Sd, respectively.
For the same contrast-to-noise level, an average of 59% (46%-71%) and 51% (38%-68%) dose reduction was achieved for IR and Sd, respectively. For the same subjective image quality, the dose could be reduced by 25% (2%-42%) and 44% (33%-54%) using IR and Sd, respectively. CONCLUSION This study showed an average dose reduction between 27% and 70% for the new Stellar detector, which is equivalent to using IR instead of FBP.
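Several abstracts in this list report contrast-to-noise ratio (CNR) measured in regions of interest. For reference, a minimal sketch of the usual computation (the Hounsfield-unit samples below are hypothetical, not taken from any of the studies):

```python
import numpy as np

def cnr(roi_a, roi_b):
    """Contrast-to-noise ratio: mean attenuation difference between two
    regions of interest, divided by the noise (std) in the background ROI."""
    return abs(np.mean(roi_a) - np.mean(roi_b)) / np.std(roi_b)

# Hypothetical HU samples from a vessel ROI and a background ROI
vessel = np.array([310.0, 305.0, 298.0, 302.0])
background = np.array([48.0, 55.0, 52.0, 45.0])
ratio = cnr(vessel, background)
```

A higher CNR at the same dose, or the same CNR at a lower dose, is the quantity these studies compare across reconstruction methods.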
Abstract:
In this thesis, we develop an adaptive framework for Monte Carlo rendering, and more specifically for Monte Carlo Path Tracing (MCPT) and its derivatives. MCPT is attractive because it can handle a wide variety of light transport effects, such as depth of field, motion blur, indirect illumination, participating media, and others, in an elegant and unified framework. However, MCPT is a sampling-based approach, and is only guaranteed to converge in the limit, as the sampling rate grows to infinity. At finite sampling rates, MCPT renderings are often plagued by noise artifacts that can be visually distracting. The adaptive framework developed in this thesis leverages two core strategies to address noise artifacts in renderings: adaptive sampling and adaptive reconstruction. Adaptive sampling consists of increasing the sampling rate on a per-pixel basis, to ensure that each pixel value is below a predefined error threshold. Adaptive reconstruction leverages the available samples on a per-pixel basis, aiming for an optimal trade-off between minimizing residual noise artifacts and preserving the edges in the image. In our framework, we greedily minimize the relative Mean Squared Error (rMSE) of the rendering by iterating over sampling and reconstruction steps. Given an initial set of samples, the reconstruction step aims at producing the rendering with the lowest rMSE on a per-pixel basis, and the next sampling step then further reduces the rMSE by distributing additional samples according to the magnitude of the residual rMSE of the reconstruction. This iterative approach tightly couples the adaptive sampling and adaptive reconstruction strategies by ensuring that we densely sample only those regions of the image where adaptive reconstruction cannot properly resolve the noise.
In a first implementation of our framework, we demonstrate the usefulness of our greedy error minimization using a simple reconstruction scheme leveraging a filterbank of isotropic Gaussian filters. In a second implementation, we integrate a powerful edge-aware filter that can adapt to the anisotropy of the image. Finally, in a third implementation, we leverage auxiliary feature buffers that encode scene information (such as surface normals, positions, or textures) to improve the robustness of the reconstruction in the presence of strong noise.
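The sampling/reconstruction loop described above can be sketched in simplified form. The following toy example is not the thesis implementation: the "renderer", the filter scales, and the per-pixel error proxy are all stand-ins. It only illustrates the structure of iterating a Gaussian-filterbank reconstruction with error-proportional sample allocation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))   # stand-in for the true image

def render(spp):
    """Toy 'renderer': pixel estimates whose noise shrinks as 1/sqrt(spp)."""
    return clean + rng.normal(0.0, 0.2, clean.shape) / np.sqrt(np.maximum(spp, 1.0))

spp = np.full(clean.shape, 4.0)                        # samples per pixel
for _ in range(3):
    img = render(spp)
    # Reconstruction: filterbank of isotropic Gaussians; pick per pixel the
    # scale closest to a wider-filtered reference (a crude error proxy).
    bank = [gaussian_filter(img, s) for s in (0.5, 1.0, 2.0)]
    ref = gaussian_filter(img, 4.0)
    errs = np.stack([(b - ref) ** 2 for b in bank])
    best = np.argmin(errs, axis=0)
    recon = np.choose(best, bank)
    # Sampling: distribute a fixed budget proportionally to a residual
    # relative-error estimate, densifying where reconstruction struggles.
    rmse = np.abs(recon - img) / (np.abs(recon) + 1e-2)
    spp += 4.0 * clean.size * rmse / rmse.sum()
```

The real framework replaces the error proxy with a proper per-pixel rMSE estimate and the filterbank with progressively stronger edge-aware reconstructions.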
Abstract:
Image denoising continues to be an active research topic. Although state-of-the-art denoising methods are numerically impressive and approach theoretical limits, they suffer from visible artifacts. While they produce acceptable results for natural images, human eyes are less forgiving when viewing synthetic images. At the same time, current methods are becoming more complex, making analysis and implementation difficult. We propose image denoising as a simple physical process, which progressively reduces noise by deterministic annealing. The results of our implementation are numerically and visually excellent. We further demonstrate that our method is particularly suited for synthetic images. Finally, we offer a new perspective on image denoising using robust estimators.
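The abstract gives no algorithmic details, so the following is purely a generic illustration of annealing-style progressive smoothing, not the authors' method: each pixel is blended toward its neighbourhood mean with a "temperature" that decays over iterations.

```python
import numpy as np

def anneal_denoise(img, steps=20, t0=1.0):
    """Generic annealing-style smoothing (illustrative only): repeated local
    averaging whose strength decays with a linear temperature schedule."""
    out = img.astype(float).copy()
    for k in range(steps):
        t = t0 * (1.0 - k / steps)            # decreasing 'temperature'
        nb = (np.roll(out, 1, 0) + np.roll(out, -1, 0)
              + np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 4.0
        out = (1.0 - t) * out + t * nb        # blend toward neighbourhood mean
    return out

rng = np.random.default_rng(0)
noisy = rng.normal(0.0, 1.0, (32, 32))
smoothed = anneal_denoise(noisy)
```

Early high-temperature steps remove noise aggressively; later low-temperature steps make only small corrections, which is the general intuition behind annealing schedules.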
Abstract:
For patients with extensive bilobar colorectal liver metastases (CRLM), initial surgery may not be feasible, and a multimodal approach including microwave ablation (MWA) provides the only chance for prolonged survival. Intraoperative navigation systems may improve the accuracy of ablation and surgical resection of so-called "vanishing lesions", ultimately improving patient outcome. The clinical application of intraoperative navigated liver surgery is illustrated in a patient undergoing combined resection/MWA for multiple, synchronous, bilobar CRLM. Regular follow-up with computed tomography (CT) allowed monitoring of the temporal development of the ablation zones. Of the ten lesions detected in a preoperative CT scan, the largest lesion was resected and the others were ablated using an intraoperative navigation system. Twelve months post-surgery a new lesion (Seg IVa) was detected and treated by trans-arterial embolization. Nineteen months post-surgery new liver and lung metastases were detected and palliative chemotherapy was started. The patient passed away four years after the initial diagnosis. For patients with extensive CRLM not treatable by standard surgery, navigated MWA/resection may provide excellent tumor control, improving longer-term survival. Intraoperative navigation systems provide precise, real-time information to the surgeon, aiding the decision-making process and substantially improving the accuracy of both ablation and resection. Regular follow-ups including 3D modeling allow for early discrimination between ablation zones and recurrent tumor lesions.
Abstract:
OBJECTIVES In this phantom CT study, we investigated whether images reconstructed using filtered back projection (FBP) and iterative reconstruction (IR) with reduced tube voltage and current have equivalent quality. We evaluated the effects of different acquisition and reconstruction parameter settings on image quality and radiation doses. Additionally, patient CT studies were evaluated to confirm our phantom results. METHODS Helical and axial 256 multi-slice computed tomography scans of the phantom (Catphan®) were performed with varying tube voltages (80-140 kV) and currents (30-200 mAs). 198 phantom data sets were reconstructed applying FBP and IR with increasing iterations, and soft and sharp kernels. Furthermore, 25 chest and abdomen CT scans, performed with high and low exposure per patient, were reconstructed with IR and FBP. Two independent observers evaluated image quality and radiation doses of both phantom and patient scans. RESULTS In phantom scans, noise was significantly reduced using IR with increasing iterations, independent of tissue, scan mode, tube voltage, current, and kernel. IR did not affect high-contrast resolution. Low-contrast resolution was also not negatively affected, and even improved in scans with doses <5 mGy, although object detectability generally decreased with lowering exposure. At comparable image quality levels, CTDIvol was reduced by 26-50% using IR. In patients, applying IR vs. FBP resulted in good to excellent image quality, while tube voltage and current settings could be significantly decreased. CONCLUSIONS Our phantom experiments demonstrate that the image quality levels of FBP reconstructions can also be achieved at lower tube voltages and tube currents when applying IR. Our findings were confirmed in patients, revealing the potential of IR to significantly reduce CT radiation doses.
Abstract:
AIM To compare image quality and diagnostic confidence of 100 kVp CT pulmonary angiography (CTPA) in patients with body weights (BWs) below and above 100 kg. MATERIALS AND METHODS The present retrospective study comprised 216 patients (BW 75-99 kg, 114 patients; 100-125 kg, 88 patients; >125 kg, 14 patients), who received 100 kVp CTPA to exclude pulmonary embolism. The attenuation was measured and the contrast-to-noise ratio (CNR) was calculated in the pulmonary trunk. Size-specific dose estimates (SSDEs) were evaluated. Three blinded radiologists rated subjective image quality and diagnostic confidence. Results between the BW groups and between three body mass index (BMI) groups (BMI <25 kg/m², BMI = 25-29.9 kg/m², and BMI ≥30 kg/m², i.e., normal-weight, overweight, and obese patients) were compared using the Kruskal-Wallis test. RESULTS Vessel attenuation was higher and SSDE was lower in the 75-99 kg group than at higher BWs (p-values between <0.001 and 0.03), with no difference between the 100-125 and >125 kg groups (p = 0.892 and 1). Subjective image quality and diagnostic confidence did not differ among the BW groups (p = 0.225 and 1). CNR was lower (p < 0.006) in obese patients than in normal-weight or overweight subjects. Diagnostic confidence did not differ among the BMI groups (p = 0.105). CONCLUSION CTPA at 100 kVp tube voltage can be used in patients weighing up to 125 kg with no significant deterioration of subjective image quality and confidence. The applicability of 100 kVp in the 125-150 kg BW range needs further testing in larger collectives.
Abstract:
OBJECTIVES To find a threshold body weight (BW) below 100 kg above which computed tomography pulmonary angiography (CTPA) using reduced radiation and a reduced contrast material (CM) dose provides significantly impaired quality and diagnostic confidence compared with standard-dose CTPA. METHODS In this prospectively randomised study of 501 patients with suspected pulmonary embolism and BW <100 kg, 246 were allocated to the low-dose group (80 kVp, 75 ml CM) and 255 to the normal-dose group (100 kVp, 100 ml CM). The contrast-to-noise ratio (CNR) in the pulmonary trunk was calculated. Two blinded chest radiologists independently evaluated subjective image quality and diagnostic confidence. Data were compared between the normal-dose and low-dose groups in five BW subgroups. RESULTS Vessel attenuation did not differ between the normal-dose and low-dose groups within each BW subgroup (P = 1.0). The CNR was higher with the normal-dose than with the low-dose protocol (P < 0.006) in all BW subgroups except the 90-99 kg subgroup (P = 0.812). Subjective image quality and diagnostic confidence did not differ between CT protocols in any subgroup (P between 0.960 and 1.0). CONCLUSIONS Subjective image quality and diagnostic confidence with 80 kVp CTPA are not different from those with the normal-dose protocol in any BW group up to 100 kg. KEY POINTS • 80 kVp CTPA is safe in patients weighing <100 kg • Reduced radiation and iodine dose still provide high vessel attenuation • Image quality and diagnostic confidence with low-dose CTPA are good • Diagnostic confidence does not deteriorate in obese patients weighing <100 kg.
Abstract:
The focal point of this paper is to propose and analyze a P0 discontinuous Galerkin (DG) formulation for image denoising. The scheme is based on a total variation approach that has been applied successfully in previous papers on image processing. The main idea of the new scheme is to model the restoration process in terms of a discrete energy minimization problem and to derive a corresponding DG variational formulation. Furthermore, we prove that the method admits a unique solution and that a natural maximum principle holds. In addition, a number of examples illustrate the effectiveness of the method.
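The total variation approach referenced here is presumably of the standard Rudin-Osher-Fatemi type (the exact discrete energy of the paper is not given in the abstract). In the usual notation, with f the noisy image, u the restoration, and λ a fidelity weight, the continuous model that such DG schemes discretize is:

```latex
E(u) = \int_\Omega |\nabla u| \, dx + \frac{\lambda}{2} \int_\Omega (u - f)^2 \, dx,
\qquad u^\ast = \operatorname*{arg\,min}_u \, E(u).
```

The first term penalizes oscillation while permitting sharp edges; the second keeps the restoration close to the data.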
Abstract:
We present a novel algorithm to reconstruct high-quality images from sampled pixels and gradients in gradient-domain rendering. Our approach extends screened Poisson reconstruction by adding regularization constraints. Our key idea is to exploit local patches in feature images, which contain per-pixel normals, textures, positions, etc., to formulate these constraints. We describe a GPU implementation of our approach that runs on the order of seconds on megapixel images. We demonstrate a significant improvement in image quality over screened Poisson reconstruction under the L1 norm. Because we adapt the regularization constraints to the noise level in the input, our algorithm is consistent and converges to the ground truth.
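The screened Poisson base problem that this work extends minimizes a weighted sum of a pixel-fidelity term and a gradient-fidelity term. A minimal 1D sketch of that unregularized base problem (real implementations are 2D under an L1 norm and use FFT or conjugate-gradient solvers; the dense solve below is only for clarity):

```python
import numpy as np

def screened_poisson_1d(pixels, grads, alpha=0.1):
    """Reconstruct u minimizing alpha*||u - pixels||^2 + ||D u - grads||^2,
    where D is the forward-difference operator (1D toy version)."""
    n = len(pixels)
    D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]    # forward differences, (n-1) x n
    A = alpha * np.eye(n) + D.T @ D             # normal equations (SPD)
    b = alpha * pixels + D.T @ grads
    return np.linalg.solve(A, b)

u_true = np.linspace(0.0, 1.0, 16)
noisy_pixels = u_true + 0.1                     # biased pixel estimates
exact_grads = np.diff(u_true)                   # clean gradient estimates
u = screened_poisson_1d(noisy_pixels, exact_grads)
```

With clean gradients and a constant pixel bias, the solve reproduces the gradients exactly and keeps the bias, since the offset lies in the nullspace of the gradient penalty; the paper's contribution is the additional feature-patch regularization on top of this system.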
Abstract:
With the ongoing shift in the computer graphics industry toward Monte Carlo rendering, there is a need for effective, practical noise-reduction techniques that are applicable to a wide range of rendering effects and easily integrated into existing production pipelines. This course surveys recent advances in image-space adaptive sampling and reconstruction algorithms for noise reduction, which have proven very effective at reducing the computational cost of Monte Carlo techniques in practice. These approaches leverage advanced image-filtering techniques with statistical methods for error estimation. They are attractive because they can be integrated easily into conventional Monte Carlo rendering frameworks, they are applicable to most rendering effects, and their computational overhead is modest.
Abstract:
Recently, vision-based advanced driver-assistance systems (ADAS) have received renewed interest as a means of enhancing driving safety. In particular, due to their high performance-to-cost ratio, mono-camera systems are emerging as the main focus of this field of work. In this paper we present a novel on-board road modeling and vehicle detection system, which is part of the results of the European I-WAY project. The system relies on a robust estimation of the perspective of the scene, which adapts to the dynamics of the vehicle and generates a stabilized, rectified image of the road plane. This rectified plane is used by a recursive Bayesian classifier, which classifies pixels as belonging to different classes corresponding to the elements of interest in the scenario. This stage works as an intermediate layer that isolates subsequent modules, since it absorbs the inherent variability of the scene. The system has been tested on-road in different scenarios, including varied illumination and adverse weather conditions, and the results have proved remarkable even for such complex scenarios.
Abstract:
In the field of detection and monitoring of dynamic objects in quasi-static scenes, background subtraction techniques in which the background is modeled at pixel level are extensively used despite their significant limitations. In this work we propose a novel approach to background modeling that operates at region level in a wavelet-based multi-resolution framework. Based on a segmentation of the background, each region is characterized independently as a mixture of K Gaussian modes, considering the model of the approximation and detail coefficients at the different wavelet decomposition levels. The background region characterization is updated over time, and elements of interest are detected by computing the distance between the background region models and those of each incoming image in the sequence. The inclusion of context in the modeling scheme through each region's characterization makes the model robust, able to cope not only with gradual illumination and long-term changes, but also with sudden illumination changes and the presence of strong shadows in the scene.
Abstract:
Here, a novel and efficient moving object detection strategy based on non-parametric modeling is presented. Whereas the foreground is modeled by combining color and spatial information, the background model is constructed exclusively from color information, resulting in a great reduction of the computational and memory requirements. The estimation of the background and foreground covariance matrices allows us to obtain compact moving regions while reducing the number of false detections. Additionally, the application of a tracking strategy provides a priori knowledge about the spatial position of the moving objects, which improves the performance of the Bayesian classifier.
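The abstract does not specify the density model beyond "non-parametric". As a generic illustration of the idea (kernel density estimation over a per-pixel colour history; all values below are hypothetical and not from the paper), a pixel can be flagged as foreground when its colour is unlikely under the background density:

```python
import numpy as np

def kde_prob(x, samples, bandwidth):
    """Gaussian kernel density estimate (up to a constant) of colour x
    under a stored history of background colour samples."""
    d = (x - samples) / bandwidth
    return np.mean(np.exp(-0.5 * np.sum(d * d, axis=-1)))

# Hypothetical per-pixel history of background colours (50 samples, RGB)
history = np.random.default_rng(1).normal([0.2, 0.5, 0.3], 0.02, (50, 3))
bg_like = kde_prob(np.array([0.2, 0.5, 0.3]), history, 0.05)
fg_like = kde_prob(np.array([0.9, 0.1, 0.1]), history, 0.05)
is_foreground = fg_like < 1e-3 * max(bg_like, 1e-12)
```

A full system, as in the abstract, would add spatial information to the foreground model and a tracker to stabilize the classification over time.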
Abstract:
This paper presents a study of the effect of blurred images in hand biometrics. Blurred images simulate out-of-focus effects in hand image acquisition, a common consequence of unconstrained, contact-less and platform-free hand biometrics on mobile devices. The proposed biometric system includes hand image segmentation based on multiscale aggregation, a segmentation method invariant to changes such as noise or blurriness, together with an innovative feature extraction and template creation, oriented toward obtaining invariant performance against blurring effects. The results highlight that the proposed system is invariant to low degrees of blurriness, requiring an image quality control to detect and correct images with a high degree of blurriness. The evaluation considered a synthetic database created from a publicly available database with 120 individuals. In addition, several biometric techniques could benefit from the approach proposed in this paper, since blurriness is a very common effect in biometric techniques involving image acquisition.