977 results for image noise modeling


Relevance: 30.00%

Abstract:

Purpose: To develop and evaluate a practical method for the quantification of signal-to-noise ratio (SNR) on coronary MR angiograms (MRA) acquired with parallel imaging. Materials and Methods: To quantify the spatially varying noise due to parallel imaging reconstruction, a new method has been implemented incorporating image data acquisition followed by a fast noise scan during which radio-frequency pulses, cardiac triggering and navigator gating are disabled. The performance of this method was evaluated in a phantom study where SNR measurements were compared with those of a reference standard (multiple repetitions). Subsequently, SNR of myocardium and posterior skeletal muscle was determined on in vivo human coronary MRA. Results: In a phantom, the SNR measured using the proposed method deviated less than 10.1% from the reference method for small geometry factors (<= 2). In vivo, the noise scan for a 10 min coronary MRA acquisition was acquired in 30 s. Higher signal and lower SNR, due to spatially varying noise, were found in myocardium compared with posterior skeletal muscle. Conclusion: SNR quantification based on a fast noise scan is a validated and easy-to-use method when applied to three-dimensional coronary MRA obtained with parallel imaging, as long as the geometry factor remains low.
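The core of such a measurement can be sketched in a few lines: a noise-only scan provides a local estimate of the spatially varying noise, and the SNR map is the ratio of local signal to local noise standard deviation. This is a minimal illustration of the general idea, not the authors' implementation; the patch size and the patch-based estimator are assumptions.

```python
import numpy as np

def snr_map(image, noise_scan, patch=8):
    """Estimate a per-region SNR map from an image and a noise-only scan.

    The noise scan (RF off) captures the spatially varying noise of the
    parallel-imaging reconstruction; its local standard deviation serves
    as the noise estimate for each patch of the image.
    """
    h, w = image.shape
    snr = np.zeros((h // patch, w // patch))
    for i in range(h // patch):
        for j in range(w // patch):
            sl = np.s_[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch]
            sigma = noise_scan[sl].std()
            snr[i, j] = image[sl].mean() / sigma if sigma > 0 else 0.0
    return snr

rng = np.random.default_rng(0)
img = 100.0 + rng.normal(0, 5, (64, 64))   # signal 100, noise sigma 5
noise = rng.normal(0, 5, (64, 64))         # matching noise-only scan
print(snr_map(img, noise).mean())          # expect roughly 100 / 5 = 20
```

A real implementation would additionally account for the magnitude-image noise distribution and the local geometry factor, which this sketch omits.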

Relevance: 30.00%

Abstract:

Objectives: We are interested in the numerical simulation of the anastomotic region comprised between the outflow cannula of a left ventricular assist device (LVAD) and the aorta. Segmentation, geometry reconstruction and grid generation from patient-specific data remain an issue because of the variable quality of DICOM images, in particular CT scans (e.g. metallic noise from the device, non-aortic contrast phase). We propose a general framework to overcome this problem and create suitable grids for numerical simulations. Methods: Preliminary treatment of the images is performed by reducing the level window and enhancing the contrast of the greyscale image using contrast-limited adaptive histogram equalization. A gradient anisotropic diffusion filter is applied to reduce the noise. Then, watershed segmentation algorithms and mathematical morphology filters allow reconstruction of the patient geometry. This is done using the InsightToolKit library (www.itk.org). Finally, the Vascular Modeling ToolKit (www.vmtk.org) and gmsh (www.geuz.org/gmsh) are used to create the meshes for the fluid (blood) and the structure (arterial wall, outflow cannula) and to identify the boundary layers a priori. The method was tested on five patients with left ventricular assist devices who underwent a CT-scan exam. Results: The method produced good results in four patients. The anastomosis area is recovered and the generated grids are suitable for numerical simulations. In one patient the method failed to produce a good segmentation because of the small dimension of the aortic arch with respect to the image resolution. Conclusions: The described framework allows the use of data that could not otherwise be segmented by standard automatic segmentation tools. In particular, the computational grids that have been generated are suitable for simulations that take fluid-structure interactions into account. Finally, the presented method features good reproducibility and fast application.
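The noise-reduction stage can be illustrated with a small gradient (Perona-Malik) anisotropic diffusion filter written in plain NumPy. It plays the same role as the ITK filter used in the pipeline but is not the ITK implementation, and all parameter values here are assumptions.

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=20.0, lam=0.2):
    """Gradient (Perona-Malik) anisotropic diffusion: smooths noise while
    preserving strong edges, the role the gradient anisotropic diffusion
    filter plays in the segmentation pipeline described above."""
    u = img.astype(float).copy()
    # Edge-stopping function: conduction drops across strong gradients
    g = lambda d: np.exp(-(d / kappa) ** 2)
    for _ in range(n_iter):
        # finite-difference gradients toward the 4 neighbours (periodic)
        dn = np.roll(u, -1, 0) - u
        ds = np.roll(u, 1, 0) - u
        de = np.roll(u, -1, 1) - u
        dw = np.roll(u, 1, 1) - u
        u += lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

# Demo: a noisy step edge; diffusion lowers the noise but keeps the edge
rng = np.random.default_rng(1)
step = np.zeros((32, 32))
step[:, 16:] = 100.0
noisy = step + rng.normal(0, 10, step.shape)
out = perona_malik(noisy)
```

With `kappa` well below the edge contrast, the conduction across the step is essentially zero, so the edge survives while flat-region noise is averaged away.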

Relevance: 30.00%

Abstract:

Evaluation of segmentation methods is a crucial aspect of image processing, especially in the medical imaging field, where small differences between segmented regions of the anatomy can be of paramount importance. Usually, segmentation evaluation is based on a measure that depends on the number of segmented voxels inside and outside some reference regions, called gold standards. Although other measures have also been used, in this work we propose a set of new similarity measures based on different features, such as the location and intensity values of the misclassified voxels, and the connectivity and boundaries of the segmented data. Using the multidimensional information provided by these measures, we propose a new evaluation method whose results are visualized by applying a Principal Component Analysis of the data, yielding a simplified graphical method for comparing different segmentation results. We have carried out an intensive study using several classic segmentation methods applied to a set of simulated MRI data of the brain with several noise and RF inhomogeneity levels, as well as to real data. The study shows that the new measures proposed here, together with the results obtained from the multidimensional evaluation, improve the robustness of the evaluation and provide a better understanding of the differences between segmentation methods.
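A toy version of the idea, several complementary measures per segmentation followed by a PCA projection for visual comparison, can be sketched as follows. The measures here are generic stand-ins (Dice, false-positive and false-negative rates), not the specific location/intensity/connectivity measures of the paper.

```python
import numpy as np

def measures(seg, gold):
    """Three complementary error measures for one binary segmentation."""
    tp = np.logical_and(seg, gold).sum()
    fp = np.logical_and(seg, ~gold).sum()
    fn = np.logical_and(~seg, gold).sum()
    dice = 2.0 * tp / (2 * tp + fp + fn)
    fpr = fp / max(1, (~gold).sum())   # false-positive rate
    fnr = fn / max(1, gold.sum())      # false-negative rate
    return np.array([dice, fpr, fnr])

def pca_project(rows, k=2):
    """Project the methods-by-measures matrix onto its first k principal
    components, giving a simplified comparison plot of the methods."""
    x = rows - rows.mean(axis=0)
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    return x @ vt[:k].T

# Toy gold standard and three competing segmentations of a disc
yy, xx = np.mgrid[:32, :32]
gold = (yy - 16) ** 2 + (xx - 16) ** 2 < 100
under = (yy - 16) ** 2 + (xx - 16) ** 2 < 64    # under-segmentation
over = (yy - 16) ** 2 + (xx - 16) ** 2 < 144    # over-segmentation
scores = np.vstack([measures(s, gold) for s in (gold, under, over)])
coords = pca_project(scores)
```

Each segmentation becomes one point in the projected plane, so methods with similar error profiles cluster together.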

Relevance: 30.00%

Abstract:

In many European countries, image quality for digital x-ray systems used in screening mammography is currently specified using a threshold-detail detectability method. This is a two-part study that proposes an alternative method based on calculated detectability for a model observer; the first part of the work presents a characterization of the systems. Eleven digital mammography systems were included in the study: four computed radiography (CR) systems and a group of seven digital radiography (DR) detectors, composed of three amorphous selenium-based detectors, three caesium iodide scintillator systems and a silicon wafer-based photon counting system. The technical parameters assessed included the system response curve, detector uniformity error, pre-sampling modulation transfer function (MTF), normalized noise power spectrum (NNPS) and detective quantum efficiency (DQE). The approximate quantum-noise-limited exposure range was examined using a separation of noise sources based upon standard deviation. Noise separation showed that electronic noise was the dominant noise at low detector air kerma for three systems; the remaining systems showed quantum-noise-limited behaviour between 12.5 and 380 µGy. Greater variation in detector MTF was found for the DR group than for the CR systems; MTF at 5 mm(-1) varied from 0.08 to 0.23 for the CR detectors against a range of 0.16-0.64 for the DR units. The needle CR detector had a higher MTF, lower NNPS and higher DQE at 5 mm(-1) than the powder CR phosphors. DQE at 5 mm(-1) ranged from 0.02 to 0.20 for the CR systems, while DQE at 5 mm(-1) for the DR group ranged from 0.04 to 0.41, indicating higher DQE for the DR detectors and the needle CR system than for the powder CR phosphor systems. The technical evaluation section of the study showed that the digital mammography systems were well set up and exhibited typical performance for the detector technology employed in the respective systems.
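The chain from MTF and NNPS to DQE follows the standard definition DQE(f) = MTF²(f) / (q · NNPS(f)), where q is the photon fluence at the detector. The sketch below uses made-up curves and an assumed fluence purely to show the computation, not the values measured in the study.

```python
import numpy as np

f = np.linspace(0.5, 5.0, 10)        # spatial frequency (mm^-1)
mtf = np.exp(-0.35 * f)              # assumed detector MTF
nnps = 2.0e-6 * (1 + 0.1 * f)        # assumed normalized NPS (mm^2)
q = 5.0e6                            # assumed photon fluence (mm^-2)

# Standard DQE definition: squared signal transfer over noise,
# relative to an ideal photon-counting detector.
dqe = mtf ** 2 / (q * nnps)
```

A needle phosphor's advantage appears here directly: at fixed exposure, a higher MTF and lower NNPS both raise DQE at high frequency.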

Relevance: 30.00%

Abstract:

When preparing an article on image restoration in astronomy, some topics inevitably have to be dropped to keep the work at a reasonable length. We have decided to concentrate on image and noise models and on the algorithms used to find the restoration. Topics such as parameter estimation and stopping rules are also commented on. We start by describing the Bayesian paradigm and then proceed to study the noise and blur models used by the astronomical community. The prior models used to restore astronomical images are then examined. We describe the algorithms used to find the restoration for the most common combinations of degradation and image models. We then comment on important issues such as the acceleration of algorithms, stopping rules, and parameter estimation. We also comment on the huge amount of information available to, and made available by, the astronomical community.
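As a concrete instance of the restoration algorithms built on the Poisson noise model common in astronomy, here is a minimal Richardson-Lucy deconvolution in NumPy. The circular boundary conditions and the synthetic Gaussian PSF are simplifying assumptions for the sketch.

```python
import numpy as np

def fftconv(img, otf):
    """Circular convolution of an image with a transfer function."""
    return np.real(np.fft.ifft2(np.fft.fft2(img) * otf))

def richardson_lucy(blurred, psf, n_iter=30):
    """Richardson-Lucy iteration: the maximum-likelihood estimate under
    the Poisson noise model, a classic in astronomical restoration."""
    otf = np.fft.fft2(psf)
    est = np.full_like(blurred, blurred.mean())
    for _ in range(n_iter):
        ratio = blurred / np.maximum(fftconv(est, otf), 1e-12)
        # multiply by the adjoint (correlation) of the blur operator
        est = est * np.real(np.fft.ifft2(np.fft.fft2(ratio) * np.conj(otf)))
    return est

# Synthetic star field: a single point source on a faint background
n = 32
img = np.full((n, n), 1.0)
img[16, 16] = 101.0
yy, xx = np.mgrid[:n, :n]
psf = np.exp(-((yy - 16) ** 2 + (xx - 16) ** 2) / (2 * 2.0 ** 2))
psf = np.fft.ifftshift(psf / psf.sum())   # centre the PSF at the origin
blurred = fftconv(img, np.fft.fft2(psf))
restored = richardson_lucy(blurred, psf)
```

The multiplicative update keeps the estimate nonnegative, which is one reason this algorithm suits photon-limited astronomical data.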

Relevance: 30.00%

Abstract:

PURPOSE: To evaluate the effect of a real-time adaptive trigger delay on image quality to correct for heart rate variability in 3D whole-heart coronary MR angiography (MRA). MATERIALS AND METHODS: Twelve healthy adults underwent 3D whole-heart coronary MRA with and without the use of an adaptive trigger delay. The moment of minimal coronary artery motion was visually determined on a high temporal resolution MRI. Throughout the scan performed without adaptive trigger delay, trigger delay was kept constant, whereas during the scan performed with adaptive trigger delay, trigger delay was continuously updated after each RR-interval using physiological modeling. Signal-to-noise, contrast-to-noise, vessel length, vessel sharpness, and subjective image quality were compared in a blinded manner. RESULTS: Vessel sharpness improved significantly for the middle segment of the right coronary artery (RCA) with the use of the adaptive trigger delay (52.3 +/- 7.1% versus 48.9 +/- 7.9%, P = 0.026). Subjective image quality was significantly better in the middle segments of the RCA and left anterior descending artery (LAD) when the scan was performed with adaptive trigger delay compared to constant trigger delay. CONCLUSION: Our results demonstrate that the use of an adaptive trigger delay to correct for heart rate variability improves image quality mainly in the middle segments of the RCA and LAD.

Relevance: 30.00%

Abstract:

The present research deals with an important public health threat: the pollution created by radon gas accumulation inside dwellings. The spatial modeling of indoor radon in Switzerland is particularly complex and challenging because of the many influencing factors that must be taken into account. Indoor radon data analysis must be addressed from both a statistical and a spatial point of view. As a multivariate process, it was important first to define the influence of each factor, in particular that of geology, which is closely associated with indoor radon. This association was indeed observed for the Swiss data but not proved to be the sole determinant for the spatial modeling. The statistical analysis of the data, at both the univariate and multivariate levels, was followed by an exploratory spatial analysis. Many tools proposed in the literature were tested and adapted, including fractality, declustering and moving-window methods. The use of the Quantité Morisita Index (QMI) was proposed as a procedure to evaluate data clustering as a function of the radon level. The existing declustering methods were revised and applied in an attempt to approach the global histogram parameters. The exploratory phase was accompanied by the definition of multiple scales of interest for indoor radon mapping in Switzerland. The analysis was done with a top-down resolution approach, from regional to local levels, in order to find the appropriate scales for modeling. In this sense, the data partition was optimized to cope with the stationarity conditions of geostatistical models. Common spatial modeling methods such as K Nearest Neighbors (KNN), variography and General Regression Neural Networks (GRNN) were proposed as exploratory tools. In the following section, different spatial interpolation methods were applied to a particular dataset.
A bottom-to-top approach in method complexity was adopted, and the results were analyzed together in order to find common definitions of continuity and neighborhood parameters. Additionally, a data filter based on cross-validation (the CVMF) was tested with the purpose of reducing noise at the local scale. At the end of the chapter, a series of tests for data consistency and method robustness was performed. This led to conclusions about the importance of data splitting and the limitations of generalization methods for reproducing statistical distributions. The last section was dedicated to modeling methods with probabilistic interpretations. Data transformation and simulations thus allowed the use of multigaussian models and helped take the uncertainty of the indoor radon pollution data into consideration. The categorization transform was presented as a solution for modeling extreme values through classification. Simulation scenarios were proposed, including an alternative proposal for the reproduction of the global histogram based on the sampling domain. Sequential Gaussian simulation (SGS) was presented as the method giving the most complete information, while classification performed in a more robust way. An error measure was defined in relation to the decision function for hardening the data classification. Among the classification methods, probabilistic neural networks (PNN) proved better adapted for modeling high-threshold categorization and for automation. Support vector machines (SVM), on the contrary, performed well under balanced category conditions. In general, it was concluded that no particular prediction or estimation method is better under all conditions of scale and neighborhood definition. Simulations should be the basis, while other methods can provide complementary information to accomplish efficient indoor radon decision making.
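As an illustration of the exploratory interpolation tools mentioned above, an inverse-distance-weighted K-nearest-neighbours predictor for scattered point measurements can be written in a few lines. The synthetic coordinates and field below are assumptions, not the Swiss indoor radon data.

```python
import numpy as np

def knn_predict(coords, values, query, k=5):
    """Inverse-distance-weighted KNN prediction at one query location."""
    d = np.linalg.norm(coords - query, axis=1)
    idx = np.argsort(d)[:k]                  # k nearest sample points
    w = 1.0 / np.maximum(d[idx], 1e-9)       # closer samples weigh more
    return float(np.sum(w * values[idx]) / np.sum(w))

rng = np.random.default_rng(3)
coords = rng.random((200, 2))                      # synthetic sample sites
values = np.sin(3 * coords[:, 0]) + coords[:, 1]   # smooth synthetic field
pred = knn_predict(coords, values, np.array([0.5, 0.5]))
truth = np.sin(1.5) + 0.5
```

Because the weights diverge as the query approaches a sample, the predictor is exact at the measurement locations, a property shared with the geostatistical interpolators used alongside it.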

Relevance: 30.00%

Abstract:

BACKGROUND: The potential effects of ionizing radiation are of particular concern in children. The model-based iterative reconstruction VEO(TM) is a commercially available technique intended to improve image quality and reduce noise compared with the filtered back-projection (FBP) method. OBJECTIVE: To evaluate the potential of VEO(TM) for diagnostic image quality and dose reduction in pediatric chest CT examinations. MATERIALS AND METHODS: Twenty children (mean age, 11.4 years) with cystic fibrosis underwent either a standard CT or a moderately reduced-dose CT plus a minimum-dose CT performed at 100 kVp. Reduced-dose CT examinations consisted of two consecutive acquisitions: one moderately reduced-dose CT with an increased noise index (NI = 70) and one minimum-dose CT at a CTDIvol of 0.14 mGy. Standard CTs were reconstructed using the FBP method, while low-dose CTs were reconstructed using FBP and VEO. Two senior radiologists independently evaluated diagnostic image quality by scoring anatomical structures on a four-point scale (1 = excellent, 2 = clear, 3 = diminished, 4 = non-diagnostic). Standard deviation (SD) and signal-to-noise ratio (SNR) were also computed. RESULTS: At moderately reduced doses, VEO images had significantly lower SD (P < 0.001) and higher SNR (P < 0.05) in comparison to filtered back-projection images. Further improvements were obtained at minimum-dose CT. The best diagnostic image quality was obtained with VEO at minimum-dose CT for the small structures (subpleural vessels and lung fissures) (P < 0.001). The potential for dose reduction was dependent on the diagnostic task because of the modification of image texture produced by this reconstruction. CONCLUSIONS: At minimum-dose CT, VEO enables important dose reduction depending on the clinical indication and makes visible certain small structures that were not perceptible with filtered back-projection.

Relevance: 30.00%

Abstract:

OBJECTIVE: Imaging during a period of minimal myocardial motion is of paramount importance for coronary MR angiography (MRA). The objective of our study was to evaluate the utility of FREEZE, a custom-built automated tool for the identification of the period of minimal myocardial motion, in both a moving phantom at 1.5 T and 10 healthy adults (nine men, one woman; mean age, 24.9 years; age range, 21-32 years) at 3 T. CONCLUSION: Quantitative analysis of the moving phantom showed that dimension measurements approached those obtained in the static phantom when using FREEZE. In vivo, vessel sharpness, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR) were significantly improved when coronary MRA was performed during the software-prescribed period of minimal myocardial motion (p < 0.05). Consistent with these objective findings, image quality assessments by consensus review also improved significantly when using the automated prescription of the period of minimal myocardial motion. The use of FREEZE improves the image quality of coronary MRA. At the same time, operator dependence can be minimized and the ease of use improved.

Relevance: 30.00%

Abstract:

The noise power spectrum (NPS) is the reference metric for understanding the noise content in computed tomography (CT) images. To evaluate the noise properties of clinical multidetector CT (MDCT) scanners, local 2D and 3D NPSs were computed for different acquisition and reconstruction parameters. A 64- and a 128-MDCT scanner were employed. Measurements were performed on a water phantom in axial and helical acquisition modes. The CT dose index was identical for both installations. The influence of parameters such as the pitch, the reconstruction filter (soft, standard and bone) and the reconstruction algorithm (filtered back-projection (FBP), adaptive statistical iterative reconstruction (ASIR)) was investigated. Images were also reconstructed in the coronal plane using a reformat process. Then the 2D and 3D NPS were computed. In axial acquisition mode, the 2D axial NPS showed an important magnitude variation as a function of the z-direction when measured at the phantom center. In helical mode, a directional dependency with a lobular shape was observed while the magnitude of the NPS remained constant. Important effects of the reconstruction filter, pitch and reconstruction algorithm were observed on the 3D NPS results for both MDCTs. With ASIR, a reduction of the NPS magnitude and a shift of the NPS peak toward the low-frequency range were visible. The 2D coronal NPS obtained from the reformatted images was affected by the interpolation when compared to the 2D coronal NPS obtained from the 3D measurements. The noise properties of volumes measured on last-generation MDCTs were thus studied using a local 3D NPS metric. However, the impact of noise non-stationarity may need further investigation.
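The local NPS computation itself is compact: each noise-only ROI is detrended, Fourier transformed, and the squared magnitudes are ensemble averaged with the usual pixel-area normalization. The sketch below (white noise, assumed pixel size) checks the textbook property that the NPS integrates back to the image variance; it does not reproduce the scanner measurements of the study.

```python
import numpy as np

def nps_2d(rois, pixel_mm=0.5):
    """Ensemble-averaged 2-D noise power spectrum of noise-only ROIs."""
    ny, nx = rois[0].shape
    acc = np.zeros((ny, nx))
    for roi in rois:
        # detrend (remove the ROI mean), then accumulate |FFT|^2
        acc += np.abs(np.fft.fft2(roi - roi.mean())) ** 2
    # pixel-area normalization; frequency axes span 1/pixel_mm (mm^-1)
    return acc * pixel_mm ** 2 / (len(rois) * nx * ny)

rng = np.random.default_rng(4)
rois = [rng.normal(0, 10, (32, 32)) for _ in range(50)]
nps = nps_2d(rois)
# Integrating the NPS over frequency should recover the pixel variance
variance = nps.sum() / (32 * 0.5 * 32 * 0.5)
```

The 3D version used in the study follows the same pattern with volumetric ROIs and a 3D FFT.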

Relevance: 30.00%

Abstract:

Assessment of image quality for digital x-ray mammography systems used in European screening programs relies mainly on contrast-detail CDMAM phantom scoring and requires the acquisition and analysis of many images in order to reduce variability in threshold detectability. Part II of this study proposes an alternative method based on the detectability index (d') calculated for a non-prewhitened model observer with an eye filter (NPWE). The detectability index was calculated from the normalized noise power spectrum and image contrast, both measured from an image of a 5 cm poly(methyl methacrylate) phantom containing a 0.2 mm thick aluminium square, and the pre-sampling modulation transfer function. This was performed as a function of air kerma at the detector for 11 different digital mammography systems. These calculated d' values were compared against threshold gold thickness (T) results measured with the CDMAM test object and against derived theoretical relationships. A simple relationship was found between T and d', as a function of detector air kerma; a linear relationship was found between d' and contrast-to-noise ratio. The values of threshold thickness used to specify acceptable performance in the European Guidelines for 0.10 and 0.25 mm diameter discs were equivalent to threshold calculated detectability indices of 1.05 and 6.30, respectively. The NPWE method is a validated alternative to CDMAM scoring for use in the image quality specification, quality control and optimization of digital x-ray systems for screening mammography.
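For a radially symmetric signal, the NPWE detectability index reduces to a pair of one-dimensional integrals, d'² = [2π ∫ f S² MTF² E² df]² / [2π ∫ f S² MTF² E⁴ NPS df]. The curves below are illustrative assumptions (a Gaussian stands in for the disc signal spectrum), not the measured system data of the study.

```python
import numpy as np

f = np.linspace(0.05, 10.0, 200)   # radial spatial frequency (mm^-1)
df = f[1] - f[0]

s = np.exp(-2.0 * f ** 2)          # assumed signal spectrum S(f)
mtf = np.exp(-0.5 * f)             # assumed system MTF
eye = f * np.exp(-0.8 * f)         # assumed eye filter E(f)
nps = 1e-5 * np.ones_like(f)       # assumed flat normalized NPS (mm^2)

# Radial (2-pi f) integration of the NPWE template and noise terms
num = (2 * np.pi * np.sum(f * s ** 2 * mtf ** 2 * eye ** 2) * df) ** 2
den = 2 * np.pi * np.sum(f * s ** 2 * mtf ** 2 * eye ** 4 * nps) * df
d_prime = np.sqrt(num / den)

# Quartering the NPS (halving the noise amplitude) doubles detectability
den_low = 2 * np.pi * np.sum(f * s ** 2 * mtf ** 2 * eye ** 4 * (nps / 4)) * df
d_prime_low = np.sqrt(num / den_low)
```

This scaling is what links d' to the contrast-to-noise ratio reported in the paper: with a flat NPS, d' is proportional to signal contrast over noise amplitude.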

Relevance: 30.00%

Abstract:

PURPOSE: EOS (EOS imaging S.A., Paris, France) is an x-ray imaging system that uses slot-scanning technology in order to optimize the trade-off between image quality and dose. The goal of this study was to characterize the EOS system in terms of occupational exposure, organ doses to patients, and image quality for full spine examinations. METHODS: Occupational exposure was determined by measuring the ambient dose equivalents in the radiological room during a standard full spine examination. The patient dosimetry was performed using anthropomorphic phantoms representing an adolescent and a five-year-old child. The organ doses were measured with thermoluminescent detectors and then used to calculate effective doses. Patient exposure with EOS was then compared to dose levels reported for conventional radiological systems. Image quality was assessed in terms of spatial resolution and the different noise contributions in order to evaluate the detector performance of the system. The spatial-frequency signal transfer efficiency of the imaging system was quantified by the detective quantum efficiency (DQE). RESULTS: The use of a protective apron is recommended when the medical staff or parents have to stand near the cubicle in the radiological room. The estimated effective dose to patients undergoing a full spine examination with the EOS system was 290 μSv for an adult and 200 μSv for a child. The MTF and NPS are nonisotropic, with higher values in the scanning direction; they are, in addition, energy-dependent but independent of scanning speed. The system was shown to be quantum-limited, with a maximum DQE of 13%. The relevance of the DQE for slot-scanning systems is also addressed. CONCLUSIONS: In summary, the estimated effective dose was 290 μSv for an adult, and the image quality remains comparable to that of conventional systems.

Relevance: 30.00%

Abstract:

In this paper we propose a method for computing JPEG quantization matrices for a given mean square error (MSE) or PSNR. We then employ our method to compute JPEG standard progressive operation mode definition scripts using a quantization approach. It is therefore no longer necessary to use a trial-and-error procedure to obtain a desired PSNR and/or definition script, which reduces cost. First, we establish a relationship between a Laplacian source and its uniform quantization error. We apply this model to the coefficients obtained in the discrete cosine transform stage of the JPEG standard. An image may then be compressed under the JPEG standard subject to a global MSE (or PSNR) constraint and a set of local constraints determined by the JPEG standard and visual criteria. Second, we study the JPEG standard progressive operation mode from a quantization-based approach. A relationship is found between the measured image quality at a given stage of the coding process and a quantization matrix. Thus, the definition script construction problem can be reduced to a quantization problem. Simulations show that our method generates better quantization matrices than the classical method based on scaling the JPEG default quantization matrix. The PSNR estimate usually has an error smaller than 1 dB, and this figure decreases for high PSNR values. Definition scripts may be generated while avoiding an excessive number of stages and removing small stages that do not contribute a noticeable image quality improvement during the decoding process.
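Under the common high-rate approximation, a uniform quantizer of step q contributes an expected error of q²/12 per coefficient, so the MSE of a globally scaled quantization matrix has a closed form and the scale needed for a target PSNR can be solved directly. This is a simplified stand-in for the Laplacian-source model of the paper, shown only to illustrate the quantization-to-PSNR mapping.

```python
import numpy as np

# JPEG default luminance quantization matrix (Annex K of the standard)
Q50 = np.array([
    [16, 11, 10, 16, 24, 40, 51, 61],
    [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56],
    [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77],
    [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103, 99],
], dtype=float)

def mse_highrate(qmat):
    # Each DCT coefficient quantized with step q contributes ~ q^2 / 12
    return np.mean(qmat ** 2) / 12.0

def scale_for_psnr(target_psnr, base=Q50):
    """Closed-form scale factor: under the high-rate model the MSE grows
    with the square of a global scale, so no search is needed."""
    target_mse = 255.0 ** 2 / 10 ** (target_psnr / 10.0)
    return np.sqrt(target_mse / mse_highrate(base))

s = scale_for_psnr(40.0)
q40 = np.clip(np.round(s * Q50), 1, 255)   # integer matrix for ~40 dB
```

The paper's Laplacian model refines this by accounting for the actual coefficient statistics, which matters precisely where the high-rate assumption breaks down.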

Relevance: 30.00%

Abstract:

The problem of synthetic aperture radar (SAR) interferometric phase noise reduction is addressed. A new technique based on discrete wavelet transforms is presented. This technique guarantees high-resolution phase estimation without resorting to phase image segmentation. Areas containing only noise are left almost unprocessed. Tests with synthetic and real interferograms are reported.
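The spirit of the technique can be shown with a single-level Haar transform in plain NumPy: the complex interferogram (rather than the wrapped phase itself) is decomposed, the detail bands are soft-thresholded, and the phase is read back. The Haar wavelet, the single decomposition level and the threshold value are simplifying assumptions, not the transform used in the paper.

```python
import numpy as np

def haar_fwd(x):
    """One level of the 2-D Haar DWT (a stand-in for the paper's DWT)."""
    lo = (x[:, 0::2] + x[:, 1::2]) / np.sqrt(2)
    hi = (x[:, 0::2] - x[:, 1::2]) / np.sqrt(2)
    return ((lo[0::2] + lo[1::2]) / np.sqrt(2),   # LL (approximation)
            (lo[0::2] - lo[1::2]) / np.sqrt(2),   # LH
            (hi[0::2] + hi[1::2]) / np.sqrt(2),   # HL
            (hi[0::2] - hi[1::2]) / np.sqrt(2))   # HH

def haar_inv(ll, lh, hl, hh):
    lo = np.empty((2 * ll.shape[0], ll.shape[1]))
    hi = np.empty_like(lo)
    lo[0::2], lo[1::2] = (ll + lh) / np.sqrt(2), (ll - lh) / np.sqrt(2)
    hi[0::2], hi[1::2] = (hl + hh) / np.sqrt(2), (hl - hh) / np.sqrt(2)
    x = np.empty((lo.shape[0], 2 * lo.shape[1]))
    x[:, 0::2], x[:, 1::2] = (lo + hi) / np.sqrt(2), (lo - hi) / np.sqrt(2)
    return x

def denoise_phase(phase, thr=0.5):
    """Soft-threshold the detail bands of the complex interferogram
    (cos + j sin), then read the phase back; filtering the complex
    signal avoids the wrapping discontinuities of the raw phase."""
    soft = lambda c: np.sign(c) * np.maximum(np.abs(c) - thr, 0.0)
    parts = []
    for comp in (np.cos(phase), np.sin(phase)):
        ll, lh, hl, hh = haar_fwd(comp)
        parts.append(haar_inv(ll, soft(lh), soft(hl), soft(hh)))
    return np.arctan2(parts[1], parts[0])

# Demo: a smooth phase ramp corrupted by phase noise
rng = np.random.default_rng(5)
yy, xx = np.mgrid[:32, :32]
truth = 0.06 * xx
noisy = truth + rng.normal(0, 0.3, truth.shape)
clean = denoise_phase(noisy)
wrap = lambda p: np.angle(np.exp(1j * p))      # wrapped phase difference
err_noisy = np.abs(wrap(noisy - truth)).mean()
err_clean = np.abs(wrap(clean - truth)).mean()
```

Because pure-noise coefficients rarely exceed the threshold while fringe structure concentrates in few large coefficients, noise-only areas are barely altered, matching the behaviour described in the abstract.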

Relevance: 30.00%

Abstract:

Image filtering is a widely used approach to image enhancement in the design of digital imaging systems. It is applied in television and camera design to improve the quality of the output image and to avoid problems such as image blurring, which gains importance in the design of large displays and digital cameras. This thesis proposes a new image filtering method based on visual characteristics of the human eye, such as its modulation transfer function (MTF). In contrast to traditional filtering methods based on human visual characteristics, this thesis takes into account the anisotropy of human vision. The proposed method is based on laboratory measurements of the human eye MTF and accounts for the degradation of the image that the eye introduces. The method enhances an image so that, after degradation by the eye's MTF, it gives the perception of the original image quality. This thesis gives a basic understanding of the image filtering approach and of the concept of the MTF, and describes an algorithm that performs image enhancement based on the MTF of the human eye. Experiments have shown good results according to human evaluation. Suggestions to improve the algorithm are also given for future work.
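A minimal isotropic version of the idea, pre-amplifying the spectrum by a bounded inverse of an eye-MTF model so that the eye's own degradation cancels it, can be sketched as follows. The thesis uses measured, anisotropic eye-MTF data; the Gaussian curve and all parameters here are assumptions.

```python
import numpy as np

def precompensate(img, sigma=1.2, max_gain=3.0):
    """Boost each spatial frequency by the (bounded) inverse of an
    assumed Gaussian eye-MTF model, so that after degradation by the
    eye the image approximates the original."""
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    eye_mtf = np.exp(-2 * (np.pi * sigma) ** 2 * (fx ** 2 + fy ** 2))
    gain = np.minimum(1.0 / eye_mtf, max_gain)  # bound the noise boost
    return np.real(np.fft.ifft2(np.fft.fft2(img) * gain))

def eye_degrade(img, sigma=1.2):
    """Simulated viewing: attenuate the spectrum by the same eye MTF."""
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    eye_mtf = np.exp(-2 * (np.pi * sigma) ** 2 * (fx ** 2 + fy ** 2))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * eye_mtf))

# Viewing the precompensated image should look closer to the original
rng = np.random.default_rng(6)
img = rng.random((64, 64))
err_plain = np.abs(eye_degrade(img) - img).mean()
err_comp = np.abs(eye_degrade(precompensate(img)) - img).mean()
```

The gain bound is the practical compromise: an unbounded inverse would amplify high-frequency noise far more than it restores perceived sharpness.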