961 results for Noise removal in images
Abstract:
BACKGROUND: The potential effects of ionizing radiation are of particular concern in children. The model-based iterative reconstruction VEO(TM) is a technique commercialized to improve image quality and reduce noise compared with the filtered back-projection (FBP) method. OBJECTIVE: To evaluate the potential of VEO(TM) on diagnostic image quality and dose reduction in pediatric chest CT examinations. MATERIALS AND METHODS: Twenty children (mean 11.4 years) with cystic fibrosis underwent either a standard CT or a moderately reduced-dose CT plus a minimum-dose CT performed at 100 kVp. Reduced-dose CT examinations consisted of two consecutive acquisitions: one moderately reduced-dose CT with increased noise index (NI = 70) and one minimum-dose CT at CTDIvol 0.14 mGy. Standard CTs were reconstructed using the FBP method while low-dose CTs were reconstructed using FBP and VEO. Two senior radiologists evaluated diagnostic image quality independently by scoring anatomical structures using a four-point scale (1 = excellent, 2 = clear, 3 = diminished, 4 = non-diagnostic). Standard deviation (SD) and signal-to-noise ratio (SNR) were also computed. RESULTS: At moderately reduced doses, VEO images had significantly lower SD (P < 0.001) and higher SNR (P < 0.05) in comparison to filtered back-projection images. Further improvements were obtained at minimum-dose CT. The best diagnostic image quality was obtained with VEO at minimum-dose CT for the small structures (subpleural vessels and lung fissures) (P < 0.001). The potential for dose reduction was dependent on the diagnostic task because of the modification of the image texture produced by this reconstruction. CONCLUSIONS: At minimum-dose CT, VEO enables important dose reduction depending on the clinical indication and makes visible certain small structures that were not perceptible with filtered back-projection.
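The noise metrics reported above (standard deviation within a region of interest and signal-to-noise ratio) are straightforward to reproduce. The sketch below is only an illustration, not the authors' measurement protocol: it assumes a CT slice already loaded as a NumPy array in Hounsfield units and a hypothetical ROI position.

```python
import numpy as np

def roi_noise_metrics(image, row, col, size=20):
    """Noise (SD) and signal-to-noise ratio inside a square ROI of a CT slice (HU)."""
    roi = image[row:row + size, col:col + size]
    sd = roi.std(ddof=1)        # image noise estimate
    snr = roi.mean() / sd       # signal-to-noise ratio
    return sd, snr

# Hypothetical usage: compare FBP and VEO reconstructions of the same slice.
# fbp_sd, fbp_snr = roi_noise_metrics(fbp_slice, 120, 200)
# veo_sd, veo_snr = roi_noise_metrics(veo_slice, 120, 200)
```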
Abstract:
This article presents a global vision of images in forensic science. The proliferation of perspectives on the use of images throughout criminal investigations and the increasing demand for research on this topic seem to demand a forensic science-based analysis. In this study, the definitions of and concepts related to material traces are revisited and applied to images, and a structured approach is used to persuade the scientific community to extend and improve the use of images as traces in criminal investigations. Current research efforts focus on technical issues and evidence assessment. This article provides a sound foundation for rationalising and explaining the processes involved in the production of clues from trace images. For example, the mechanisms through which these visual traces become clues of presence or action are described. An extensive literature review of forensic image analysis emphasises the existing guidelines and knowledge available for answering investigative questions (who, what, where, when and how). However, complementary developments are still necessary to demystify many aspects of image analysis in forensic science, including how to review and select images or use them to reconstruct an event or assist intelligence efforts. The hypothetico-deductive reasoning pathway used to discover unknown elements of an event or crime can also help scientists understand the underlying processes involved in their decision making. An analysis of a single image in an investigative or probative context is used to demonstrate the highly informative potential of images as traces and/or clues. Research efforts should be directed toward formalising the extraction and combination of clues from images. An appropriate methodology is key to expanding the use of images in forensic science.
Abstract:
This paper presents a method to reconstruct 3D surfaces of silicon wafers from 2D images of printed circuits taken with a scanning electron microscope. Our reconstruction method combines the physical model of the optical acquisition system with prior knowledge about the shapes of the patterns in the circuit; the result is a shape-from-shading technique with a shape prior. The reconstruction of the surface is formulated as an optimization problem with an objective functional that combines a data-fidelity term on the microscopic image with two prior terms on the surface. The data term models the acquisition system through the irradiance equation characteristic of the microscope; the first prior is a smoothness penalty on the reconstructed surface, and the second prior constrains the shape of the surface to agree with the expected shape of the pattern in the circuit. In order to account for the variability of the manufacturing process, this second prior includes a deformation field that allows a nonlinear elastic deformation between the expected pattern and the reconstructed surface. As a result, the minimization problem has two unknowns, and the reconstruction method provides two outputs: 1) a reconstructed surface and 2) a deformation field. The reconstructed surface is derived from the shading observed in the image and the prior knowledge about the pattern in the circuit, while the deformation field produces a mapping between the expected shape and the reconstructed surface that provides a measure of deviation between the circuit design models and the real manufacturing process.
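A minimal sketch of such an objective functional is given below, assuming a hypothetical irradiance model and a simple bilinear warp of the prior pattern; the weighting parameters and discretization are illustrative, not the paper's actual formulation. In the full method the deformation field would also carry its own elastic regularizer, and the surface and deformation would be optimized jointly.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def objective(surface, deformation, image, prior_shape,
              irradiance, lam_smooth=1.0, lam_prior=1.0):
    """Energy = data fidelity + smoothness prior + (deformable) shape prior.

    surface     : 2-D array of heights to recover
    deformation : displacement field, shape (2, H, W), applied to the prior pattern
    irradiance  : hypothetical function mapping a surface to a predicted SEM image
    """
    # Data term: discrepancy between the observed image and the irradiance model
    data = np.sum((image - irradiance(surface)) ** 2)

    # First prior: smoothness penalty on the reconstructed surface
    gy, gx = np.gradient(surface)
    smooth = np.sum(gy ** 2 + gx ** 2)

    # Second prior: surface should match the elastically deformed expected pattern
    rows, cols = np.indices(prior_shape.shape)
    warped = map_coordinates(prior_shape,
                             [rows + deformation[0], cols + deformation[1]],
                             order=1, mode='nearest')
    prior = np.sum((surface - warped) ** 2)

    return data + lam_smooth * smooth + lam_prior * prior
```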
Abstract:
It is commonly regarded that the overuse of traffic control devices desensitizes drivers and leads to disrespect, especially on low-volume secondary roads with limited enforcement. The maintenance of traffic signs is also a tort liability concern, exacerbated by unnecessary signs. The Federal Highway Administration's (FHWA) Manual on Uniform Traffic Control Devices (MUTCD) and the Institute of Transportation Engineers' (ITE) Traffic Control Devices Handbook provide guidance for the implementation of STOP signs based on expected compliance with right-of-way rules, provision of through traffic flow, context (proximity to other controlled intersections), speed, sight distance, and crash history. Which approach(es) to stop is left to engineering judgment and usually depends on traffic volume or functional class/continuity of the system. Although presently being considered by the National Committee on Traffic Control Devices, traffic volume itself is not given as a criterion for implementation in the MUTCD. STOP signs have been installed at many locations for reasons that no longer (or perhaps never) met engineering needs. If in fact the presence of STOP signs does not increase safety, removal should be considered. To date, however, no guidance exists for the removal of STOP signs at two-way stop-controlled intersections. The scope of this research is ultra-low-volume (< 150 daily entering vehicles) unpaved intersections in rural agricultural areas of Iowa, where each of the 99 counties may have as many as 300 or more STOP sign pairs. Overall safety performance is examined as a function of a county excessive-use factor, developed specifically for this study and based on various volume ranges and terrain as a proxy for sight distance. Four conclusions are supported: (1) there is no statistical difference in the safety performance of ultra-low-volume stop-controlled and uncontrolled intersections for all drivers or for younger and older drivers (although, interestingly, older drivers are underrepresented at both types of intersections); (2) compliance with stop control (as indicated by crash performance) does not appear to be affected by the use or excessive use of STOP signs, even when adjusted for volume and a sight distance proxy; (3) crash performance does not appear to be improved by the liberal use of stop control; (4) safety performance of uncontrolled intersections appears to decline relative to stop-controlled intersections above about 150 daily entering vehicles. Subject to adequate sight distance, traffic professionals may wish to consider removal of control below this threshold. The report concludes with a section on methods and legal considerations for safe removal of stop control.
Abstract:
This paper proposes an automatic hand detection system that combines the Fourier-Mellin Transform with other computer vision techniques to achieve hand detection in cluttered-scene color images. The proposed system uses the Fourier-Mellin Transform as an invariant feature extractor to perform RST-invariant hand detection. In the first stage of the system, a simple non-adaptive skin-color-based image segmentation and a corner-based interest point detector are used to identify regions of interest that contain possible matches. A sliding-window algorithm then scans the image at different scales, performing the FMT calculations only in the previously detected regions of interest and comparing the extracted FM descriptor of each window with a database of hand descriptors obtained from a training image set. The experimental results suggest that Fourier-Mellin invariant features are a promising approach to automatic hand detection.
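One common way to compute an RST-invariant Fourier-Mellin descriptor (FFT magnitude for translation invariance, a log-polar remap so rotation and scale become shifts, then a second FFT magnitude) is sketched below with OpenCV; the window size, normalization and nearest-neighbour matching step are assumptions for illustration, not the authors' exact pipeline.

```python
import numpy as np
import cv2

def fourier_mellin_descriptor(patch, size=64):
    """RST-invariant descriptor of a grayscale image window.
    FFT magnitude -> log-polar remap -> FFT magnitude again."""
    gray = cv2.resize(patch, (size, size)).astype(np.float32)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray)))
    logpolar = cv2.warpPolar(spectrum, (size, size), (size / 2, size / 2),
                             size / 2, cv2.INTER_LINEAR + cv2.WARP_POLAR_LOG)
    descriptor = np.abs(np.fft.fft2(logpolar)).ravel()
    return descriptor / (np.linalg.norm(descriptor) + 1e-12)

# Hypothetical matching: nearest neighbour against a database of hand descriptors.
# score = min(np.linalg.norm(fourier_mellin_descriptor(win) - d) for d in hand_db)
```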
Abstract:
Do our brains implicitly track the energetic content of the foods we see? Using electrical neuroimaging of visual evoked potentials (VEPs), we show that the human brain can rapidly discern a food's energetic value, vis-à-vis its fat content, solely from its visual presentation. Responses to images of high-energy and low-energy food differed over two distinct time periods. The first period, starting at approximately 165 ms post-stimulus onset, followed from modulations in VEP topography and, by extension, in the configuration of the underlying brain network. Statistical comparison of source estimations identified differences distributed across a wide network including posterior occipital regions and temporo-parietal cortices typically associated with object processing, as well as inferior frontal cortices typically associated with decision-making. During a subsequent processing stage (starting at approximately 300 ms), responses differed both topographically and in strength, with source estimations differing predominantly within prefrontal cortical regions implicated in reward assessment and decision-making. These effects occur orthogonally to the task actually being performed and suggest that reward properties such as a food's energetic content are processed rapidly and in parallel by a distributed network of brain regions involved in object categorization, reward assessment, and decision-making.
Abstract:
Impressive developments in X-ray imaging are associated with X-ray phase contrast computed tomography based on grating interferometry, a technique that provides increased contrast compared with conventional absorption-based imaging. A new "single-step" method capable of separating phase information from other contributions has been recently proposed. This approach not only simplifies data-acquisition procedures, but, compared with the existing phase step approach, significantly reduces the dose delivered to a sample. However, the image reconstruction procedure is more demanding than for traditional methods and new algorithms have to be developed to take advantage of the "single-step" method. In the work discussed in this paper, a fast iterative image reconstruction method named OSEM (ordered subsets expectation maximization) was applied to experimental data to evaluate its performance and range of applicability. The OSEM algorithm with different subsets was also characterized by comparison of reconstruction image quality and convergence speed. Computer simulations and experimental results confirm the reliability of this new algorithm for phase-contrast computed tomography applications. Compared with the traditional filtered back projection algorithm, in particular in the presence of a noisy acquisition, it furnishes better images at a higher spatial resolution and with lower noise. We emphasize that the method is highly compatible with future X-ray phase contrast imaging clinical applications.
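For readers unfamiliar with OSEM, a generic sketch of the ordered-subsets EM update is given below; it is not the phase-contrast-specific algorithm of the study, and it assumes a dense system matrix for clarity (real CT implementations use on-the-fly projectors). Using more subsets gives more updates per pass through the data, which is the convergence-speed trade-off mentioned above.

```python
import numpy as np

def osem(A, y, n_subsets=4, n_iters=10, eps=1e-9):
    """Ordered-subsets expectation maximization for y ~ A x with x >= 0.

    A : (n_projections, n_voxels) system matrix (dense here for clarity)
    y : measured projection data, one value per row of A
    """
    x = np.ones(A.shape[1])                          # non-negative initial estimate
    subsets = np.array_split(np.arange(A.shape[0]), n_subsets)
    for _ in range(n_iters):
        for idx in subsets:                          # one multiplicative update per subset
            As, ys = A[idx], y[idx]
            forward = As @ x + eps                   # forward projection of current estimate
            x *= (As.T @ (ys / forward)) / (As.sum(axis=0) + eps)
    return x
```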
Abstract:
Assessment of image quality for digital x-ray mammography systems used in European screening programs relies mainly on contrast-detail CDMAM phantom scoring and requires the acquisition and analysis of many images in order to reduce variability in threshold detectability. Part II of this study proposes an alternative method based on the detectability index (d') calculated for a non-prewhitened model observer with an eye filter (NPWE). The detectability index was calculated from the normalized noise power spectrum and image contrast, both measured from an image of a 5 cm poly(methyl methacrylate) phantom containing a 0.2 mm thick aluminium square, and the pre-sampling modulation transfer function. This was performed as a function of air kerma at the detector for 11 different digital mammography systems. These calculated d' values were compared against threshold gold thickness (T) results measured with the CDMAM test object and against derived theoretical relationships. A simple relationship was found between T and d', as a function of detector air kerma; a linear relationship was found between d' and contrast-to-noise ratio. The values of threshold thickness used to specify acceptable performance in the European Guidelines for 0.10 and 0.25 mm diameter discs were equivalent to threshold calculated detectability indices of 1.05 and 6.30, respectively. The NPWE method is a validated alternative to CDMAM scoring for use in the image quality specification, quality control and optimization of digital x-ray systems for screening mammography.
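The detectability index for an NPWE observer is commonly written as a frequency-domain integral over the task function, MTF, eye filter and NNPS. The radially symmetric sketch below follows that published form and only illustrates the calculation; it is not the study's actual code, and the input spectra are assumed to be sampled on a common frequency grid.

```python
import numpy as np

def npwe_dprime(freqs, task, mtf, eye, nnps):
    """Detectability index d' for a non-prewhitened model observer with eye filter,
    evaluated on a 1-D radial spatial-frequency grid (radially symmetric case).

    freqs : spatial frequencies (mm^-1)
    task  : |dS(f)|, Fourier spectrum of the detection task (e.g. a disc signal)
    mtf   : presampling modulation transfer function
    eye   : eye filter E(f)
    nnps  : normalized noise power spectrum
    """
    ring = 2.0 * np.pi * freqs                     # converts the 2-D integral to radial form
    num = np.trapz((task * mtf * eye) ** 2 * ring, freqs)
    den = np.trapz((task * mtf) ** 2 * eye ** 4 * nnps * ring, freqs)
    return num / np.sqrt(den)
```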
Abstract:
The objective of this work was to evaluate the application of the spectral-temporal response surface (STRS) classification method to Moderate Resolution Imaging Spectroradiometer (MODIS, 250 m) sensor images in order to estimate soybean areas in Mato Grosso state, Brazil. The classification was carried out using the maximum likelihood algorithm (MLA) adapted to the STRS method. Thirty segments of 30x30 km were chosen across the main agricultural regions of Mato Grosso state, using data from the 2005/2006 summer season (October to March), and were mapped based on fieldwork data and TM/Landsat-5 and CCD/CBERS-2 images. Five thematic classes were considered: Soybean, Forest, Cerrado, Pasture and Bare Soil. The STRS classification was performed over the area intersected with the subset of 30x30-km segments. In regions with soybean predominance, the STRS classification overestimated the reference acreage by 21.31%. In regions where soybean fields were less prevalent, the classifier overestimated the reference acreage by 132.37%. The overall classification accuracy was 80%. MODIS sensor images and the STRS algorithm proved promising for the classification of soybean areas in regions dominated by large farms. However, results for fragmented areas and smaller farms were poorer, with soybean areas being overestimated.
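A Gaussian maximum likelihood classifier applied to STRS-style feature vectors (each pixel represented by its stacked multitemporal MODIS observations) can be sketched as follows. This is a generic illustration: class priors, covariance regularization and the segment-intersection logic are omitted, and only the class set is taken from the abstract.

```python
import numpy as np

def train_mlc(samples_by_class):
    """Fit a Gaussian maximum-likelihood classifier.
    samples_by_class: dict mapping class name -> (n_samples, n_features) array,
    where each feature vector stacks the multitemporal MODIS observations."""
    params = {}
    for name, X in samples_by_class.items():
        mean = X.mean(axis=0)
        cov = np.cov(X, rowvar=False)
        params[name] = (mean, np.linalg.inv(cov), np.log(np.linalg.det(cov)))
    return params

def classify(pixels, params):
    """Assign each pixel (row = temporal feature vector) to the class with the
    largest Gaussian log-likelihood (equal priors assumed)."""
    names = list(params)
    scores = np.empty((pixels.shape[0], len(names)))
    for j, name in enumerate(names):
        mean, inv_cov, logdet = params[name]
        d = pixels - mean
        scores[:, j] = -0.5 * (logdet + np.einsum('ij,jk,ik->i', d, inv_cov, d))
    return [names[k] for k in scores.argmax(axis=1)]

# Hypothetical usage with the five classes from the study:
# params = train_mlc({'Soybean': soy, 'Forest': forest, 'Cerrado': cerrado,
#                     'Pasture': pasture, 'Bare Soil': bare})
# labels = classify(image_pixels, params)
```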
Abstract:
We have studied sidebranching induced by fluctuations in dendritic growth. The amplitude of sidebranching induced by internal (equilibrium) concentration fluctuations in the case of solidification with solutal diffusion is computed. This amplitude turns out to be significantly smaller than values reported in previous experiments. The effects of other possible sources of fluctuations (of an external origin) are examined by introducing nonconserved noise in a phase-field model. This reproduces the characteristics of sidebranching found in experiments. Results also show that sidebranching induced by external noise is qualitatively similar to that of internal noise, and it is only distinguished by its amplitude.
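As a rough numerical illustration of "nonconserved noise in a phase-field model", the toy Allen-Cahn-type update below adds spatially uncorrelated Gaussian noise directly to the order-parameter equation. It is only a sketch: the solidification model in the study also couples the phase field to a solutal diffusion field, which is not reproduced here.

```python
import numpy as np

def phase_field_step(phi, dt=0.01, dx=1.0, noise_amp=0.01, rng=None):
    """One explicit Euler step of a toy nonconserved phase-field equation
    with additive (nonconserved) Gaussian noise on a periodic grid."""
    rng = np.random.default_rng() if rng is None else rng
    lap = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
           np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4.0 * phi) / dx**2
    drift = lap + phi - phi**3            # relaxation toward the double-well minima
    eta = rng.standard_normal(phi.shape)  # noise uncorrelated in space and time
    return phi + dt * drift + noise_amp * np.sqrt(dt) * eta
```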
Abstract:
Land use/cover classification is one of the most important applications in remote sensing. However, mapping accurate land use/cover spatial distribution is a challenge, particularly in moist tropical regions, due to the complex biophysical environment and limitations of remote sensing data per se. This paper reviews a decade of experiments related to land use/cover classification in the Brazilian Amazon. Through comprehensive analysis of the classification results, it is concluded that spatial information inherent in remote sensing data plays an essential role in improving land use/cover classification. Incorporation of suitable textural images into multispectral bands and use of segmentation-based methods are valuable ways to improve land use/cover classification, especially for high spatial resolution images. Data fusion of multi-resolution images within optical sensor data is vital for visual interpretation, but may not improve classification performance. In contrast, integration of optical and radar data did improve classification performance when the proper data fusion method was used. Among the classification algorithms available, the maximum likelihood classifier is still an important method for providing reasonably good accuracy, but nonparametric algorithms, such as classification tree analysis, have the potential to provide better results; however, they often require more time for parameter optimization. Proper use of hierarchical-based methods is fundamental for developing accurate land use/cover classification, mainly from historical remotely sensed data.
Abstract:
Abstract