941 results for image noise modeling


Relevance: 80.00%

Abstract:

Sclera segmentation is of significant importance for eye and iris biometrics. However, it has not been extensively researched as a separate topic; it is usually treated only as a component of a broader task. This paper proposes a novel sclera segmentation algorithm for colour images which operates at the pixel level. By exploring various colour spaces, the proposed approach is robust to image noise and to different gaze directions. The algorithm's robustness is enhanced by a two-stage classifier: at the first stage, a set of simple classifiers is employed, while at the second stage a neural network classifier operates on the probability space generated by the stage-1 classifiers. The proposed method was ranked first in the Sclera Segmentation Benchmarking Competition 2015, part of BTAS 2015, with a precision of 95.05% at a recall of 94.56%.
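The two-stage design can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the channel thresholds and network weights below are invented for demonstration, whereas the paper learns both stages from training data.

```python
import numpy as np

def stage1_probabilities(pixels_rgb):
    """Stage 1: a bank of simple pixel classifiers, each producing a
    sclera probability. The thresholds here are hypothetical; the
    paper derives its classifiers from training data."""
    r, g = pixels_rgb[:, 0], pixels_rgb[:, 1]
    brightness = pixels_rgb.mean(axis=1)
    saturation = pixels_rgb.max(axis=1) - pixels_rgb.min(axis=1)
    p_bright = 1.0 / (1.0 + np.exp(-(brightness - 0.6) * 10))  # sclera is bright...
    p_unsat = 1.0 / (1.0 + np.exp((saturation - 0.2) * 10))    # ...weakly saturated...
    p_not_red = 1.0 / (1.0 + np.exp((r - g - 0.15) * 10))      # ...and not skin-red
    return np.stack([p_bright, p_unsat, p_not_red], axis=1)

def stage2_nn(probs, W1, b1, W2, b2):
    """Stage 2: a small neural network classifying each pixel in the
    probability space produced by stage 1."""
    h = np.tanh(probs @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

# Untrained random weights, for shape/data-flow illustration only.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)
W2, b2 = rng.normal(size=4), 0.0

pixels = rng.random((100, 3))        # 100 RGB pixels in [0, 1]
p1 = stage1_probabilities(pixels)    # (100, 3): one probability per stage-1 classifier
p2 = stage2_nn(p1, W1, b1, W2, b2)   # (100,): final sclera probability per pixel
```

The point of the architecture is that stage 2 never sees raw colours, only the stage-1 probabilities, which is what makes the combination robust across colour spaces.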

Relevance: 80.00%

Abstract:

Techniques for generating triangular meshes from intensity images either take a segmented image as input or generate a mesh without distinguishing the individual structures contained in the image. Both facts can hinder the use of such techniques in applications such as numerical simulation. In this work we reformulate Imesh, a previously developed technique for mesh generation from intensity images. The reformulation makes Imesh more versatile by providing a unified framework in which the refinement metric can be changed easily, rendering it effective for constructing meshes for applications with varied requirements, such as numerical simulation and image modeling. Furthermore, a deeper study of the point insertion problem and the development of a geometric criterion for segmentation are also reported in this paper. Meshes with a theoretical guarantee of quality can also be obtained for each individual image structure as a post-processing step, a characteristic not usually found in other methods. The tests demonstrate the flexibility and effectiveness of the approach.
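The idea of a pluggable refinement metric can be illustrated with a simplified, quadtree-style analogue. Imesh itself refines a triangulation, not a quadtree, and the function names here are hypothetical; only the framework point carries over: swapping the metric changes the refinement behaviour without touching the rest of the code.

```python
import numpy as np

def intensity_range(block):
    """Refinement metric: intensity spread inside the region.
    Replacing this single function retargets the refinement."""
    return float(block.max() - block.min())

def refine(img, x0, y0, x1, y1, metric, tol, out):
    """Split a region into four while the metric exceeds tol;
    the leaves play the role of mesh elements."""
    block = img[y0:y1, x0:x1]
    if metric(block) <= tol or x1 - x0 <= 2 or y1 - y0 <= 2:
        out.append((x0, y0, x1, y1))
        return
    xm, ym = (x0 + x1) // 2, (y0 + y1) // 2
    for a, b, c, d in [(x0, y0, xm, ym), (xm, y0, x1, ym),
                       (x0, ym, xm, y1), (xm, ym, x1, y1)]:
        refine(img, a, b, c, d, metric, tol, out)

# Synthetic image: flat background with one bright square whose
# edges do not align with the coarse grid.
img = np.zeros((64, 64)); img[20:40, 20:40] = 1.0
leaves = []
refine(img, 0, 0, 64, 64, intensity_range, 0.1, leaves)
# Small elements cluster along the square's boundary; the flat
# regions are covered by a few large elements.
```

A metric based on, say, interpolation error or gradient magnitude could be dropped in for `intensity_range` to serve a different application, which is the versatility the reformulated Imesh aims at.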

Relevance: 80.00%

Abstract:

This paper presents a new framework for generating triangular meshes from textured color images. The framework combines a texture classification technique, called the W-operator, with Imesh, a method originally conceived to generate simplicial meshes from gray-scale images. An extension of W-operators to textured color images is proposed, which employs a combination of RGB and HSV channels and Sequential Floating Forward Search (SFFS) guided by a mean conditional entropy criterion to extract features from the training data. The W-operator is built into the local error estimation used by Imesh to choose the mesh vertices. The W-operator also assigns a label to each triangle during mesh construction, so that a segmented mesh is obtained at the end of the process. The results show that combining W-operators with Imesh yields a texture-classification-based triangle mesh generation framework that outperforms pixel-based methods. Crown Copyright (C) 2009 Published by Elsevier Inc. All rights reserved.
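The mean conditional entropy criterion that guides the feature search can be sketched directly. This is a generic plug-in estimate from labelled training pairs, not the paper's code; SFFS would greedily add or remove features so as to minimise this value.

```python
from collections import Counter
from math import log2

def mean_conditional_entropy(patterns, labels):
    """Estimate H(Y | X): the entropy of the label Y given the
    observed feature pattern X, averaged over the training pairs.
    Lower values mean the chosen features predict the label better."""
    n = len(labels)
    joint = Counter(zip(map(tuple, patterns), labels))
    marginal = Counter(map(tuple, patterns))
    h = 0.0
    for (x, y), count in joint.items():
        p_xy = count / n                 # P(X = x, Y = y)
        p_y_given_x = count / marginal[x]  # P(Y = y | X = x)
        h -= p_xy * log2(p_y_given_x)
    return h

# A perfectly predictive feature gives H(Y|X) = 0 ...
X_good = [(0,), (0,), (1,), (1,)]
y = [0, 0, 1, 1]
# ... while an uninformative one gives H(Y|X) = H(Y) = 1 bit.
X_bad = [(0,), (0,), (0,), (0,)]
print(mean_conditional_entropy(X_good, y))  # 0.0
print(mean_conditional_entropy(X_bad, y))   # 1.0
```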

Relevance: 80.00%

Abstract:

This paper proposes a methodology for edge detection in digital images that uses the Canny detector together with a priori focusing of the edge structure by nonlinear anisotropic diffusion via a partial differential equation (PDE). The strategy aims to minimize the effect of the well-known duality of the Canny detector, whereby it is not possible to simultaneously improve insensitivity to image noise and the localization precision of detected edges. Anisotropic diffusion via the PDE is used to focus the edge structure beforehand because of its notable ability to smooth the image selectively: homogeneous regions are strongly smoothed while the physical edges, i.e., those actually related to objects present in the image, are largely preserved. The solution to the aforementioned duality consists in applying the Canny detector at a fine Gaussian scale, but only along the edge regions focused by the anisotropic diffusion. The results show that the method is appropriate for applications involving automatic feature extraction, since it allows high-precision localization of thinned edges, which are usually related to objects present in the image. © Nauka/Interperiodica 2006.
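The selective smoothing step can be illustrated with a minimal Perona-Malik-style diffusion in NumPy. This is a common formulation of nonlinear anisotropic diffusion; the parameter values are arbitrary and the paper's PDE may differ in detail.

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=0.1, lam=0.2):
    """Perona-Malik-style nonlinear diffusion: the conduction
    coefficient g(|grad u|) approaches 0 at strong edges, so flat
    regions are smoothed while edges are preserved."""
    u = img.astype(float).copy()

    def g(d):
        # Edge-stopping function; kappa sets the gradient magnitude
        # that counts as an "edge".
        return np.exp(-(d / kappa) ** 2)

    for _ in range(n_iter):
        dn = np.roll(u, 1, axis=0) - u   # finite differences to the
        ds = np.roll(u, -1, axis=0) - u  # four nearest neighbours
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u = u + lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

# A noisy vertical step edge: diffusion suppresses the noise in the
# flat regions but leaves the step itself nearly untouched.
rng = np.random.default_rng(1)
img = np.zeros((32, 32)); img[:, 16:] = 1.0
noisy = img + rng.normal(0, 0.05, img.shape)
smooth = anisotropic_diffusion(noisy)
```

In the paper's scheme, the Canny detector (at a fine Gaussian scale) would then be applied only along the edge regions that survive this diffusion.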

Relevance: 80.00%

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance: 80.00%

Abstract:

The potential of image-based modeling (IBM) as a low-cost 3D scanning technique for modelling Roman inscriptions is analysed, based on work carried out at the Museo Arqueológico Nacional de Madrid on a wide range of epigraphic supports (stone, bronze, clay), with excellent results for the cataloguing, study and dissemination of this type of historical documentation. The results make it possible to obtain 3D models of Roman inscriptions that can be incorporated into ongoing digital epigraphy projects, making them accessible from computers and mobile devices at no added cost to researchers.

Relevance: 80.00%

Abstract:

The research aims at developing a framework for semantic-based digital survey of architectural heritage. Rooted in knowledge-based modeling, which extracts mathematical constraints on geometry from architectural treatises, as-built information obtained from image-based modeling is integrated with the ideal model on a BIM platform. Knowledge-based modeling transforms the geometry and parametric relations of architectural components from 2D prints into 3D digital models and, thanks to parametric modeling, creates a large number of variations based on shape grammar in real time. It also provides prior knowledge for semantically segmenting unorganized survey data. The emergence of SfM (Structure from Motion) makes it possible to reconstruct large, complex architectural scenes with high flexibility, low cost and full automation, but with low reliability of metric accuracy. We address this problem by combining photogrammetric approaches consisting of camera configuration, image enhancement, bundle adjustment, etc. Experiments show that the accuracy of image-based modeling following our workflow is comparable to that of range-based modeling. We also demonstrate positive results of our optimized approach in the digital reconstruction of a portico, where a low-texture vault and dramatic transitions of illumination cause great difficulties for the workflow without optimization. Once the as-built model is obtained, it is integrated with the ideal model on a BIM platform, which allows multiple forms of data enrichment. Despite its promising prospects in the AEC industry, BIM has been developed with limited consideration of reverse engineering from survey data. Besides representing the architectural heritage in parallel ways (ideal model and as-built model) and comparing their differences, we address how to create an as-built model in BIM software, which is still an open problem.
The research is intended to serve as a foundation for the study of architectural history, the documentation and conservation of architectural heritage, and the renovation of existing buildings.

Relevance: 80.00%

Abstract:

Rapid technical advances in computed tomography have led to an increasing number of clinical indications. Unfortunately, radiation exposure to the population has increased at the same time because of the growing total number of CT examinations. In recent years various publications have demonstrated the feasibility of radiation dose reduction for CT examinations with no compromise in image quality and no loss of interpretation accuracy. The majority of the proposed methods for dose optimization are easy to apply and are independent of the detector array configuration. This article reviews indication-dependent principles (e.g. application of reduced tube voltage for CT angiography, selection of the collimation and the pitch, reducing the total number of imaging series, lowering the tube voltage and tube current for non-contrast CT scans), manufacturer-dependent principles (e.g. accurate application of automatic tube current modulation, use of adaptive image noise filters and use of iterative image reconstruction) and general principles (e.g. appropriate patient centering in the gantry, avoiding over-ranging of the CT scan, lowering the tube voltage and tube current for survey CT scans) which lead to radiation dose reduction.

Relevance: 80.00%

Abstract:

OBJECTIVE: The purpose of this study was to evaluate in a phantom study the effect of patient size on radiation dose for abdominal MDCT with automatic tube current modulation. MATERIALS AND METHODS: One or two 4-cm-thick circumferential layers of fat-equivalent material were added to the abdomen of an anthropomorphic phantom to simulate patients of three sizes: small (cross-sectional dimensions, 18 x 22 cm), average-sized (26 x 30 cm), and oversized (34 x 38 cm). Imaging was performed with a 64-MDCT scanner with combined z-axis and xy-axis tube current modulation according to two protocols: protocol A had a noise index of 12.5 H, and protocol B of 15.0 H. Radiation doses to three abdominal organs and to the skin were assessed, and image noise was also measured. RESULTS: Despite increasing patient size, the measured image noise was similar for protocol A (range, 11.7-12.2 H) and protocol B (range, 13.9-14.8 H) (p > 0.05). With the two protocols, in comparison with the doses for the small patient, the abdominal organ doses of the average-sized patient and the oversized patient increased by 161.5-190.6% and 426.9-528.1%, respectively (p < 0.001). The skin dose increased by as much as 268.6% for the average-sized patient and 816.3% for the oversized patient compared with the small patient (p < 0.001). CONCLUSION: Oversized patients undergoing abdominal MDCT with tube current modulation receive significantly higher doses than small patients do. The noise index needs to be adjusted to the body habitus to ensure dose efficiency.

Relevance: 80.00%

Abstract:

We aimed at assessing stent geometry and in-stent contrast attenuation with 64-slice CT in patients with various coronary stents. Twenty-nine patients (mean age 60 +/- 11 years; 24 men) with 50 stents underwent CT within 2 weeks after stent placement. Mean in-stent luminal diameter and the reference vessel diameters proximal and distal to the stent were assessed with CT and compared to quantitative coronary angiography (QCA). Stent length was also compared to the manufacturer's values. Images were reconstructed using medium-smooth (B30f) and sharp (B46f) kernels. All 50 stents could be visualized with CT. Mean in-stent luminal diameter was systematically underestimated with CT compared to QCA (1.60 +/- 0.39 mm versus 2.49 +/- 0.45 mm; P < 0.0001), resulting in a modest correlation between QCA and CT (r = 0.49; P < 0.0001). Stent length as given by the manufacturer was 18.2 +/- 6.2 mm, correlating well with CT (18.5 +/- 5.7 mm; r = 0.95; P < 0.0001) and QCA (17.4 +/- 5.6 mm; r = 0.87; P < 0.0001). Proximal and distal reference vessel diameters were similar with CT and QCA (P = 0.06 and P = 0.03). B46f kernel images showed higher image noise (P < 0.05) and lower in-stent CT attenuation values (P < 0.001) than images reconstructed with the B30f kernel. Thus, 64-slice CT allows measurement of coronary artery in-stent density but significantly underestimates the true in-stent diameter compared to QCA.

Relevance: 80.00%

Abstract:

PURPOSE: To prospectively evaluate, for the depiction of simulated hypervascular liver lesions in a phantom, the effect of a low-tube-voltage, high-tube-current computed tomographic (CT) technique on image noise, contrast-to-noise ratio (CNR), lesion conspicuity, and radiation dose. MATERIALS AND METHODS: A custom liver phantom containing 16 cylindrical cavities (four cavities each of 3, 5, 8, and 15 mm in diameter) filled with various iodinated solutions to simulate hypervascular liver lesions was scanned with a 64-section multi-detector row CT scanner at 140, 120, 100, and 80 kVp, with corresponding tube current-time product settings of 225, 275, 420, and 675 mAs, respectively. The CNRs for six simulated lesions filled with different iodinated solutions were calculated. A figure of merit (FOM) for each lesion was computed as the ratio of CNR² to effective dose (ED). Three radiologists independently graded the conspicuity of the 16 simulated lesions. An anthropomorphic phantom was scanned to evaluate the ED. Statistical analysis included one-way analysis of variance. RESULTS: Image noise increased by 45% with the 80-kVp protocol compared with the 140-kVp protocol (P < .001). However, the lowest ED and the highest CNR were achieved with the 80-kVp protocol. The FOM results indicated that at a constant ED, a reduction of tube voltage from 140 to 120, 100, and 80 kVp increased the CNR by factors of at least 1.6, 2.4, and 3.6, respectively (P < .001). At a constant CNR, the corresponding reductions in ED were by factors of 2.5, 5.5, and 12.7, respectively (P < .001). The highest lesion conspicuity was achieved with the 80-kVp protocol. CONCLUSION: The CNR of simulated hypervascular liver lesions can be substantially increased, and the radiation dose reduced, by using an 80-kVp, high-tube-current CT technique.
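The figure of merit used above is simple to state: FOM = CNR²/ED. A small sketch with invented numbers (not the study's measurements) shows how a lower-kVp protocol can win on a dose-normalised basis.

```python
def cnr(lesion_hu, background_hu, noise_sd):
    """Contrast-to-noise ratio: lesion-background contrast in HU
    divided by the image noise (SD of attenuation)."""
    return abs(lesion_hu - background_hu) / noise_sd

def figure_of_merit(cnr_value, effective_dose):
    """FOM = CNR^2 / ED: dose-normalised image quality, since CNR^2
    scales linearly with dose in quantum-noise-limited imaging."""
    return cnr_value ** 2 / effective_dose

# Invented numbers: at 80 kVp, iodine contrast rises faster than noise.
cnr_140 = cnr(lesion_hu=120, background_hu=60, noise_sd=10.0)   # 6.0
cnr_80 = cnr(lesion_hu=210, background_hu=60, noise_sd=14.5)    # ~10.3
print(figure_of_merit(cnr_140, 10.0))  # 3.6
print(figure_of_merit(cnr_80, 8.0))    # larger: better quality per unit dose
```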

Relevance: 80.00%

Abstract:

Image-based modeling of tumor growth combines methods from cancer simulation and medical imaging. In this context, we present a novel approach to adapt a healthy brain atlas to MR images of tumor patients. In order to establish correspondence between a healthy atlas and a pathologic patient image, tumor growth modeling in combination with registration algorithms is employed. In a first step, the tumor is grown in the atlas based on a new multi-scale, multi-physics model including growth simulation from the cellular level up to the biomechanical level, accounting for cell proliferation and tissue deformations. Large-scale deformations are handled with an Eulerian approach for finite element computations, which can operate directly on the image voxel mesh. Subsequently, dense correspondence between the modified atlas and patient image is established using nonrigid registration. The method offers opportunities in atlas-based segmentation of tumor-bearing brain images as well as for improved patient-specific simulation and prognosis of tumor progression.
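As a rough illustration of the growth-simulation ingredient, here is a single explicit step of the Fisher-Kolmogorov reaction-diffusion equation, a common continuum model of tumor cell density. It stands in for the paper's multi-scale, multi-physics model, which also couples proliferation to biomechanical tissue deformation.

```python
import numpy as np

def grow_step(c, D=0.1, rho=0.05, dt=1.0):
    """One explicit finite-difference step of
    dc/dt = D * lap(c) + rho * c * (1 - c):
    diffusion spreads the tumor cell density c while logistic
    proliferation increases it towards carrying capacity 1."""
    lap = (np.roll(c, 1, 0) + np.roll(c, -1, 0) +
           np.roll(c, 1, 1) + np.roll(c, -1, 1) - 4.0 * c)
    return c + dt * (D * lap + rho * c * (1.0 - c))

c = np.zeros((32, 32))
c[16, 16] = 1.0                 # seed the tumor at one voxel
for _ in range(100):
    c = grow_step(c)
# The density spreads outward from the seed and the total mass grows.
```

In the paper's pipeline, the deformation field produced by such a simulation in the atlas is what makes the subsequent nonrigid registration to the patient image feasible.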

Relevance: 80.00%

Abstract:

This paper addresses the modelling and validation of an evolvable hardware architecture which can be mapped onto a 2D systolic structure implemented on commercial reconfigurable FPGAs. The adaptation capabilities of the architecture are exercised to validate its evolvability. The underlying proposal is the use of a library of reconfigurable components, characterised by their partial bitstreams, which the evolutionary algorithm uses to find a solution to a given task. The evolution of image noise filters is selected as the proof-of-concept application. Results show that the computation speed of the evolved circuit is higher than with the Virtual Reconfigurable Circuits approach, and that this can be exploited in the evolution process through dynamic reconfiguration.
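A software analogue of the evolution loop can be sketched: an elitist (mu + lambda)-style algorithm evolving the coefficients of a 3x3 filter against a clean reference image. The kernels play the role of the partial-bitstream components; everything below (population size, mutation rate, fitness function) is illustrative, not the paper's setup.

```python
import numpy as np

def apply_kernel(img, k):
    """Valid 3x3 convolution, standing in for one configuration of
    the reconfigurable filter array."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * img[dy:dy + h - 2, dx:dx + w - 2]
    return out

def fitness(k, noisy, clean):
    """Negative MSE of the filtered image against the clean one."""
    return -float(np.mean((apply_kernel(noisy, k) - clean[1:-1, 1:-1]) ** 2))

rng = np.random.default_rng(2)
clean = np.zeros((16, 16)); clean[:, 8:] = 1.0   # step-edge test image
noisy = clean + rng.normal(0, 0.2, clean.shape)

# Seed population: identity kernel, mean kernel, plus random kernels.
identity = np.zeros((3, 3)); identity[1, 1] = 1.0
mean_kernel = np.full((3, 3), 1.0 / 9.0)
pop = [identity, mean_kernel] + [rng.normal(0, 0.3, (3, 3)) for _ in range(18)]

# Elitist evolution: keep the 5 best, mutate each into 3 children.
for _ in range(40):
    pop.sort(key=lambda k: fitness(k, noisy, clean), reverse=True)
    parents = pop[:5]
    pop = parents + [p + rng.normal(0, 0.05, (3, 3))
                     for p in parents for _ in range(3)]
best = max(pop, key=lambda k: fitness(k, noisy, clean))
# `best` denoises the step image better than the identity kernel.
```

On the FPGA, evaluating a candidate corresponds to partially reconfiguring the systolic array with the component bitstreams, which is why fast dynamic reconfiguration directly speeds up the evolution loop.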

Relevance: 80.00%

Abstract:

The aim of this work was to investigate human contrast perception at contrast levels ranging from the detection threshold to suprathreshold levels, using psychophysical techniques. The work consists of two major parts: the first deals with contrast matching, the second with contrast discrimination.

Contrast matching was used to determine when the perceived contrasts of different stimuli were equal. The effects of spatial frequency, stimulus area, image complexity and chromatic contrast on contrast detection thresholds and matches were studied. These factors influenced detection thresholds and perceived contrast at low contrast levels; at suprathreshold levels, however, perceived contrast became directly proportional to the physical contrast of the stimulus and almost independent of the factors affecting detection thresholds.

Contrast discrimination was studied by measuring contrast increment thresholds, which indicate the smallest detectable contrast difference. The effects of stimulus area, external spatial image noise and retinal illuminance were studied. These factors affected contrast detection thresholds and increment thresholds measured at low contrast levels; at high contrast levels the increment thresholds became very similar, so that the effect of these factors decreased.

Human contrast perception was modelled by regarding the visual system as a simple image-processing system. A visual signal is first low-pass filtered by the ocular optics. This is followed by spatial high-pass filtering by the neural visual pathways and the addition of internal neural noise. Detection is mediated by a local matched filter, a weighted replica of the stimulus whose sampling efficiency decreases with increasing stimulus area and complexity. According to the model, the signals compared in a contrast matching task are first transferred through the early image-processing stages mentioned above and then filtered by a restoring transfer function which compensates for the low-level filtering and the limited spatial integration at high contrast levels; the perceived contrasts of two stimuli are equal when the restored responses to the stimuli are equal. The signals to be discriminated in a contrast discrimination task likewise pass through the early image-processing stages, after which signal-dependent noise is added to the matched filter responses. The decision made by the human brain is based on a comparison of the matched filter responses to the stimuli, and the accuracy of the decision is limited by pre- and post-filter noise. The model could accurately describe the results of contrast matching and discrimination in various conditions.
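The matched-filter detection stage of the model can be sketched for a 1-D stimulus. The template, noise level and trial count below are illustrative, not taken from the experiments.

```python
import numpy as np

def matched_filter_response(stimulus, template, noise_sd):
    """Response of a local matched filter: the noisy input is
    correlated with a weighted replica of the expected stimulus,
    normalised so the noise-only response has unit variance."""
    return float(stimulus @ template) / (noise_sd * np.linalg.norm(template))

rng = np.random.default_rng(4)
template = np.sin(np.linspace(0.0, 2.0 * np.pi, 64))  # a 1-D 'grating' stimulus
noise_sd = 1.0

# Mean response over 200 trials, with and without the stimulus.
r_signal = np.mean([matched_filter_response(template + rng.normal(0, noise_sd, 64),
                                            template, noise_sd)
                    for _ in range(200)])
r_noise = np.mean([matched_filter_response(rng.normal(0, noise_sd, 64),
                                           template, noise_sd)
                   for _ in range(200)])
# r_signal sits several noise standard deviations above r_noise,
# which is what makes the stimulus detectable.
```

In the full model, the template's sampling efficiency would shrink with stimulus area and complexity, and signal-dependent noise added after the filter would limit discrimination accuracy.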

Relevance: 80.00%

Abstract:

Fluoroscopic images exhibit severe signal-dependent quantum noise, due to the reduced X-ray dose involved in image formation, which is generally modelled as Poisson-distributed. However, the image gray-level transformations commonly applied by fluoroscopic devices to enhance contrast modify the noise statistics and the relationship between image noise variance and expected pixel intensity. Image denoising is essential to improve the quality of fluoroscopic images and their clinical information content. Simple averaging filters are commonly employed in real-time processing, but they tend to blur edges and details. An extensive comparison of advanced denoising algorithms specifically designed for signal-dependent noise (AAS, BM3Dc, HHM, TLS) and for independent additive noise (AV, BM3D, K-SVD) is presented. Simulated test images degraded by various levels of Poisson quantum noise, as well as real clinical fluoroscopic images, were considered. Typical gray-level transformations (e.g. white compression) were also applied in order to evaluate their effect on the denoising algorithms. Performance was evaluated in terms of peak signal-to-noise ratio (PSNR), signal-to-noise ratio (SNR), mean square error (MSE), the structural similarity index (SSIM) and computational time. On average, the filters designed for signal-dependent noise provided better image restorations than those assuming additive white Gaussian noise (AWGN). The collaborative denoising strategy was found to be the most effective on both simulated and real data, also in the presence of image gray-level transformations. White compression, by inherently reducing the greater noise variance of brighter pixels, appeared to help the denoising algorithms perform more effectively. © 2012 Elsevier Ltd. All rights reserved.
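The signal-dependence of quantum noise, and the baseline behaviour of the simple averaging filter the paper compares against, can be reproduced in a few lines. This is an illustration of the noise model, not the paper's experiment; intensities are arbitrary.

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(test, float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(3)
# Simulated frame: quantum noise is Poisson, so its variance equals
# the expected photon count and grows with pixel intensity.
clean = np.full((64, 64), 40.0); clean[16:48, 16:48] = 80.0
noisy = rng.poisson(clean).astype(float)

# A plain 3x3 moving average: the kind of simple real-time filter
# the paper contrasts with the advanced denoisers.
averaged = sum(np.roll(np.roll(noisy, dy, 0), dx, 1)
               for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0

print(psnr(clean, noisy))     # raw noisy frame
print(psnr(clean, averaged))  # higher PSNR, at the cost of blurred edges
```

The brighter region shows visibly larger noise variance than the dark background, which is exactly the intensity-dependence that white compression reduces and that the signal-dependent denoisers exploit.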