9 results for Adaptive Image Binarization
in BORIS: Bern Open Repository and Information System - Bern - Switzerland
Abstract:
The rapid technical advances in computed tomography (CT) have led to an increased number of clinical indications. Unfortunately, radiation exposure to the population has risen in parallel, owing to the growing total number of CT examinations. In recent years, various publications have demonstrated the feasibility of radiation dose reduction for CT examinations with no compromise in image quality and no loss in interpretation accuracy. The majority of the proposed methods for dose optimization are easy to apply and are independent of the detector array configuration. This article reviews indication-dependent principles (e.g. application of reduced tube voltage for CT angiography, selection of the collimation and the pitch, reducing the total number of imaging series, lowering the tube voltage and tube current for non-contrast CT scans), manufacturer-dependent principles (e.g. accurate application of automatic tube current modulation, use of adaptive image noise filters and use of iterative image reconstruction) and general principles (e.g. appropriate patient centering in the gantry, avoiding over-ranging of the CT scan, lowering the tube voltage and tube current for survey CT scans) that lead to radiation dose reduction.
Abstract:
To investigate whether an adaptive statistical iterative reconstruction (ASIR) algorithm improves the image quality at low-tube-voltage (80-kVp), high-tube-current (675-mA) multidetector abdominal computed tomography (CT) during the late hepatic arterial phase.
Abstract:
OBJECTIVE: The assessment of coronary stents with present-generation 64-detector row computed tomography (HDCT) scanners is limited by image noise and blooming artefacts. We evaluated the performance of adaptive statistical iterative reconstruction (ASIR) for noise reduction in coronary stent imaging with HDCT. METHODS AND RESULTS: In 50 stents of 28 patients (mean age 64 ± 10 years) undergoing coronary CT angiography (CCTA) on an HDCT scanner, the mean in-stent luminal diameter, stent length, image quality, in-stent contrast attenuation, and image noise were assessed. Studies were reconstructed using filtered back projection (FBP) and ASIR-FBP composites, and two readers graded the CCTA stent image quality on a 4-point Likert scale and determined the proportion of interpretable stent segments. ASIR reduced image noise compared with FBP (P < 0.0001). The best image quality for all clinical images was obtained with 40 and 60% ASIR, with significantly larger luminal area visualization compared with FBP (+42.1 ± 5.4% with 100% ASIR vs. FBP alone; P < 0.0001), while the measured stent length decreased (-4.7 ± 0.9%). ASIR thus improved image quality compared with FBP reconstruction.
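As a side note on how such ASIR-FBP composites are formed: the stated blend percentage acts as a linear mixing weight between the two reconstructions. The following is a minimal sketch under that assumption; the function name, the synthetic images and the noise levels are purely illustrative, not the scanner's actual implementation.

```python
import numpy as np

def blend_asir_fbp(fbp_img: np.ndarray, asir_img: np.ndarray,
                   asir_percent: float) -> np.ndarray:
    """Linear ASIR-FBP composite: e.g. 40% ASIR keeps 60% of the FBP image.
    (Assumed model; the vendor blend is applied during reconstruction.)"""
    w = asir_percent / 100.0
    return (1.0 - w) * fbp_img + w * asir_img

# Illustrative use: noise (std. dev. in a uniform region) drops as the
# ASIR fraction increases, mirroring the noise reduction reported above.
rng = np.random.default_rng(0)
fbp = 100.0 + rng.normal(0.0, 30.0, (64, 64))   # noisier FBP image
asir = 100.0 + rng.normal(0.0, 12.0, (64, 64))  # smoother iterative image
for pct in (0, 40, 60, 100):
    print(pct, float(blend_asir_fbp(fbp, asir, pct).std()))
```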
Abstract:
In this thesis, we develop an adaptive framework for Monte Carlo rendering, and more specifically for Monte Carlo Path Tracing (MCPT) and its derivatives. MCPT is attractive because it can handle a wide variety of light transport effects, such as depth of field, motion blur, indirect illumination, participating media, and others, in an elegant and unified framework. However, MCPT is a sampling-based approach and is only guaranteed to converge in the limit, as the sampling rate grows to infinity. At finite sampling rates, MCPT renderings are often plagued by noise artifacts that can be visually distracting. The adaptive framework developed in this thesis leverages two core strategies to address noise artifacts in renderings: adaptive sampling and adaptive reconstruction. Adaptive sampling consists of increasing the sampling rate on a per-pixel basis, to ensure that the error of each pixel value falls below a predefined threshold. Adaptive reconstruction leverages the available samples on a per-pixel basis, aiming for an optimal trade-off between minimizing the residual noise artifacts and preserving the edges in the image. In our framework, we greedily minimize the relative Mean Squared Error (rMSE) of the rendering by iterating over sampling and reconstruction steps. Given an initial set of samples, the reconstruction step aims at producing the rendering with the lowest rMSE on a per-pixel basis, and the next sampling step then further reduces the rMSE by distributing additional samples according to the magnitude of the residual rMSE of the reconstruction. This iterative approach tightly couples the adaptive sampling and adaptive reconstruction strategies by ensuring that we sample densely only those regions of the image where adaptive reconstruction cannot properly resolve the noise. In a first implementation of our framework, we demonstrate the usefulness of our greedy error minimization using a simple reconstruction scheme leveraging a filterbank of isotropic Gaussian filters. In a second implementation, we integrate a powerful edge-aware filter that can adapt to the anisotropy of the image. Finally, in a third implementation, we leverage auxiliary feature buffers that encode scene information (such as surface normals, position, or texture) to improve the robustness of the reconstruction in the presence of strong noise.
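To make the greedy loop concrete, here is a minimal sketch under simplifying assumptions: `render_samples` is a hypothetical renderer callback that takes a per-pixel sample-count map and returns running per-pixel mean and variance images, and the reconstruction step simply picks, per pixel, the isotropic Gaussian from a small filterbank with the lowest estimated relative error. The bias and variance estimates are crude stand-ins for the estimators developed in the thesis, not the thesis implementation itself.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def reconstruct(mean_img, var_img, sigmas=(0.5, 1.0, 2.0, 4.0)):
    """Per pixel, keep the Gaussian filter with the lowest estimated rMSE
    (crude squared-bias proxy plus approximate post-filter variance)."""
    eps = 1e-3  # guards the division for near-black pixels
    best_img = mean_img.copy()
    best_err = var_img / (mean_img**2 + eps)  # unfiltered relative error
    for s in sigmas:
        filt = gaussian_filter(mean_img, s)
        bias2 = (filt - mean_img) ** 2  # crude, noise-inflated bias proxy
        # A normalized 2D Gaussian cuts variance by roughly 1/(4*pi*s^2).
        var = gaussian_filter(var_img, s) / (4.0 * np.pi * s**2)
        err = (bias2 + var) / (mean_img**2 + eps)
        mask = err < best_err
        best_img = np.where(mask, filt, best_img)
        best_err = np.where(mask, err, best_err)
    return best_img, best_err

def adaptive_render(render_samples, shape, iterations=4, batch=16):
    """Greedy loop: sample uniformly once, then send each new batch of
    samples where the residual error of the reconstruction is largest."""
    mean_img, var_img = render_samples(np.full(shape, batch))
    for _ in range(iterations):
        _, err = reconstruct(mean_img, var_img)
        budget = batch * shape[0] * shape[1]
        counts = np.ceil(budget * err / err.sum()).astype(int)
        mean_img, var_img = render_samples(counts)
    return reconstruct(mean_img, var_img)[0]
```

The key design point this sketch preserves is the coupling: the same per-pixel error map that drives filter selection also decides where the next samples go, so smooth regions that the filters resolve well never receive a dense sampling budget.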
Abstract:
With the ongoing shift in the computer graphics industry toward Monte Carlo rendering, there is a need for effective, practical noise-reduction techniques that are applicable to a wide range of rendering effects and easily integrated into existing production pipelines. This course surveys recent advances in image-space adaptive sampling and reconstruction algorithms for noise reduction, which have proven very effective at reducing the computational cost of Monte Carlo techniques in practice. These approaches combine advanced image-filtering techniques with statistical methods for error estimation. They are attractive because they can be integrated easily into conventional Monte Carlo rendering frameworks, they are applicable to most rendering effects, and their computational overhead is modest.
Abstract:
A new generation of high-definition computed tomography (HDCT) 64-slice scanners, complemented by a new iterative image reconstruction algorithm (adaptive statistical iterative reconstruction, ASIR), offers substantially higher resolution than standard-definition CT (SDCT) scanners. As higher resolution comes at the cost of increased image noise, we compared image quality and radiation dose of coronary computed tomography angiography (CCTA) from HDCT versus SDCT. Consecutive patients (n = 93) underwent HDCT and were compared to 93 patients who had previously undergone CCTA with SDCT, matched for heart rate (HR), HR variability and body mass index (BMI). Tube voltage and current were adapted to the patient's BMI, using identical protocols in both groups. The image quality of all CCTA scans was evaluated by two independent readers in all coronary segments using a 4-point scale (1, excellent image quality; 2, blurring of the vessel wall; 3, image with artefacts but evaluative; 4, non-evaluative). Effective radiation dose was calculated from the dose-length product (DLP) multiplied by a conversion factor (0.014 mSv/(mGy × cm)). The mean image quality score from HDCT versus SDCT was comparable (2.02 ± 0.68 vs. 2.00 ± 0.76). Mean effective radiation dose did not differ significantly between HDCT (1.7 ± 0.6 mSv, range 1.0-3.7 mSv) and SDCT (1.9 ± 0.8 mSv, range 0.8-5.5 mSv; P = n.s.). HDCT scanners thus allow low-dose 64-slice CCTA with higher resolution than SDCT, at maintained image quality and an equally low radiation dose. Whether this will translate into higher accuracy of HDCT for CAD detection remains to be evaluated.
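The dose computation stated above is a single multiplication; a one-line check follows (the DLP value of 121 mGy × cm is illustrative, chosen to reproduce the mean HDCT dose, and is not reported in the abstract):

```python
# Effective dose from the dose-length product, as given in the abstract:
# E [mSv] = DLP [mGy*cm] * k, with k = 0.014 mSv/(mGy*cm) for chest CT.
def effective_dose_msv(dlp_mgy_cm: float, k: float = 0.014) -> float:
    return dlp_mgy_cm * k

print(round(effective_dose_msv(121.0), 1))  # -> 1.7, the mean HDCT dose
```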
Abstract:
New-onset impairment of ocular motility causes incomitant strabismus, i.e., a gaze-dependent ocular misalignment. This misalignment produces retinal disparity, that is, a deviation of the spatial position of an image on the retina of the two eyes, which is a trigger for a vergence eye movement that restores ocular alignment. If the vergence movement fails, the eyes remain misaligned, resulting in double vision. Adaptive processes to such incomitant vergence stimuli are poorly understood. In this study, we investigated the physiological oculomotor response of saccadic and vergence eye movements in healthy individuals after shifting gaze from a viewing position without image disparity into a field of view with increased image disparity, i.e., under conditions mimicking incomitance. Repetitive saccadic eye movements into a visual field with increased stimulus disparity led to a rapid modification of the oculomotor response: (a) saccades showed immediate disconjugacy (p < 0.001), resulting in decreased retinal image disparity at the end of a saccade; (b) vergence kinetics improved over time (p < 0.001). This modified oculomotor response enables a more prompt restoration of ocular alignment in new-onset incomitance.
Abstract:
Monte Carlo integration is firmly established as the basis for most practical realistic image synthesis algorithms because of its flexibility and generality. However, the visual quality of rendered images often suffers from estimator variance, which appears as visually distracting noise. Adaptive sampling and reconstruction algorithms reduce variance by controlling the sampling density and aggregating samples in a reconstruction step, possibly over large image regions. In this paper we survey recent advances in this area. We distinguish between “a priori” methods, which analyze the light transport equations and derive sampling rates and reconstruction filters from this analysis, and “a posteriori” methods, which apply statistical techniques to sets of samples to drive the adaptive sampling and reconstruction process. The latter typically estimate the errors of several reconstruction filters and select the best filter locally to minimize error. We discuss advantages and disadvantages of recent state-of-the-art techniques, and provide visual and quantitative comparisons. Some of these techniques are proving useful in real-world applications, and we aim to provide an overview for practitioners and researchers to assess these approaches. In addition, we discuss directions for potential further improvements.
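As an illustration of the “a posteriori” idea, the sketch below selects among box filters of increasing radius using only sample statistics: the samples are split into two independent half buffers, each candidate filter's error is estimated by cross-scoring one filtered half against the other raw half, and the per-pixel winner is kept. All names and the error proxy are illustrative, not any specific published estimator.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def a_posteriori_reconstruct(half_a, half_b, radii=(1, 2, 4, 8)):
    """Pick, per pixel, the box filter whose statistically estimated
    error (from two independent sample half buffers) is smallest."""
    best = 0.5 * (half_a + half_b)            # unfiltered mean image
    best_err = 0.25 * (half_a - half_b) ** 2  # noise proxy for that mean
    for r in radii:
        size = 2 * r + 1
        fa = uniform_filter(half_a, size)
        fb = uniform_filter(half_b, size)
        # Score filtered A against raw B (and vice versa): the halves are
        # independent, so the cross term vanishes in expectation and the
        # score tracks the filter's squared bias plus residual variance.
        err = uniform_filter(
            0.5 * ((fa - half_b) ** 2 + (fb - half_a) ** 2), size)
        filt = 0.5 * (fa + fb)
        mask = err < best_err
        best = np.where(mask, filt, best)
        best_err = np.where(mask, err, best_err)
    return best
```

In a full a posteriori pipeline, the same per-pixel error estimate would also drive the adaptive sampling step, directing further samples to regions where no candidate filter scores well.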