997 results for Image simulations
Abstract:
Objective: To evaluate the evolution of mammographic image quality in the state of Rio de Janeiro on the basis of parameters measured and analyzed during health surveillance inspections in the period from 2006 to 2011. Materials and Methods: Descriptive study analyzing parameters connected with the imaging quality of 52 mammography apparatuses inspected at least twice with a one-year interval. Results: Among the 16 analyzed parameters, 7 presented more than 70% conformity, namely: compression paddle pressure intensity (85.1%), film development (72.7%), film response (72.7%), low-contrast fine detail (92.2%), tumor mass visualization (76.5%), absence of image artifacts (94.1%), and availability of mammography-specific developers (88.2%). On the other hand, relevant parameters were below 50% conformity, namely: monthly image quality control testing (28.8%) and visualization of high-contrast details with respect to microcalcifications (47.1%). Conclusion: The analysis revealed critical situations in terms of compliance with health surveillance standards. Priority should be given to those mammography apparatuses that remained non-compliant at the second inspection performed within the one-year interval.
Abstract:
OBJECTIVE: The aim of this article was to apply psychometric theory to develop and validate a visual grading scale for assessing the visual perception of digital image quality in anteroposterior (AP) pelvis radiographs. METHODS: Psychometric theory was used to guide scale development. Seven phantom and seven cadaver images of visually and objectively predetermined quality were used to help assess scale reliability and validity. A total of 151 volunteers scored the phantom images, and 184 volunteers scored the cadaver images. Factor analysis and Cronbach's alpha were used to assess scale validity and reliability. RESULTS: A 24-item scale was produced. Aggregated mean volunteer scores for each image correlated with the rank order of the visually and objectively predetermined image qualities. Scale items had good interitem correlation (≥0.2) and high factor loadings (≥0.3). Cronbach's alpha (reliability) revealed that the scale has acceptable levels of internal reliability for both phantom and cadaver images (α = 0.8 and 0.9, respectively). Factor analysis suggested that the scale is multidimensional (assessing multiple quality themes). CONCLUSION: This study represents the first full development and validation of a visual image quality scale using psychometric theory. It is likely that this scale will have clinical, training and research applications. ADVANCES IN KNOWLEDGE: This article presents data to create and validate visual grading scales for radiographic examinations. The visual grading scale, for AP pelvis examinations, can act as a validated tool for future research, teaching and clinical evaluations of image quality.
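Cronbach's alpha, used in the abstract above to quantify scale reliability, can be computed directly from item-level scores. A minimal sketch with made-up data (not the study's volunteer scores):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha from item scores.

    items[i][j] is the score of respondent j on item i.
    Population variance is used throughout.
    """
    k = len(items)
    item_var_sum = sum(pvariance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    return k / (k - 1) * (1 - item_var_sum / pvariance(totals))

# Two perfectly correlated items give maximal internal consistency
print(cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4]]))  # -> 1.0
```

Values around 0.8-0.9, as reported in the abstract, are conventionally read as acceptable-to-good internal reliability.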
Abstract:
Forensic intelligence has recently gathered increasing attention as a potential expansion of forensic science that may contribute in a wider policing and security context. Whilst the new avenue is certainly promising, relatively few attempts to incorporate models, methods and techniques into practical projects are reported. This work reports a practical application of a generalised and transversal framework for developing forensic intelligence processes, referred to here as the Transversal model, adapted from previous work. Visual features present in the images of four datasets of false identity documents were systematically profiled and compared using image processing for the detection of a series of modus operandi (M.O.) actions. The nature of these series and their relation to the notion of common source was evaluated with respect to alternative known information, and inferences were drawn regarding the respective crime systems. 439 documents seized by police and border guard authorities across 10 jurisdictions in Switzerland, with known and unknown source-level links, formed the datasets for this study. Training sets were developed based on both known source-level data and visually supported relationships. Performance was evaluated through the use of intra-variability and inter-variability scores drawn from over 48,000 comparisons. The optimised method exhibited significant sensitivity combined with strong specificity, demonstrating its ability to support forensic intelligence efforts.
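The intra- versus inter-variability scoring described above can be illustrated with a toy sketch: mean pairwise distances between feature profiles within one known series versus across two series. The feature vectors here are invented; the paper's actual image features are not reproduced.

```python
from itertools import combinations, product
from math import dist  # Euclidean distance, Python 3.8+

def mean_intra(group):
    """Mean pairwise distance within one series."""
    pairs = list(combinations(group, 2))
    return sum(dist(a, b) for a, b in pairs) / len(pairs)

def mean_inter(group_a, group_b):
    """Mean pairwise distance across two series."""
    pairs = list(product(group_a, group_b))
    return sum(dist(a, b) for a, b in pairs) / len(pairs)

# Hypothetical 2-D feature profiles from two M.O. series
series1 = [(0.10, 0.20), (0.15, 0.25), (0.12, 0.18)]
series2 = [(0.90, 0.80), (0.85, 0.75), (0.95, 0.90)]

# Documents from the same series should be closer to each other
print(mean_intra(series1) < mean_inter(series1, series2))  # -> True
```

A well-separated profiling method keeps intra-series scores well below inter-series scores, which is the sensitivity/specificity trade-off the abstract evaluates at scale.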
Abstract:
An unsupervised approach to image segmentation which fuses region and boundary information is presented. The proposed approach takes advantage of the combined use of three different strategies: the guidance of seed placement, the control of the decision criterion, and boundary refinement. The new algorithm uses the boundary information to initialize a set of active regions which compete for the pixels in order to segment the whole image. The method is implemented on a multiresolution representation which ensures noise robustness as well as computational efficiency. The accuracy of the segmentation results has been demonstrated through an objective comparative evaluation of the method.
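The region-competition idea can be reduced to a minimal sketch: a single seeded region grown by an intensity criterion. This is only the core growth step under simplifying assumptions; the paper's actual algorithm also fuses boundary information, runs several competing regions, and works on a multiresolution representation.

```python
from collections import deque

def grow_region(image, seed, tol):
    """Grow a region from `seed`, adding 4-neighbours whose intensity
    is within `tol` of the running region mean."""
    rows, cols = len(image), len(image[0])
    region = {seed}
    total = image[seed[0]][seed[1]]
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in region:
                if abs(image[nr][nc] - total / len(region)) <= tol:
                    region.add((nr, nc))
                    total += image[nr][nc]
                    queue.append((nr, nc))
    return region

img = [[10, 10, 80],
       [10, 10, 80],
       [80, 80, 80]]
print(sorted(grow_region(img, (0, 0), tol=5)))
# -> the four 10-valued pixels: [(0, 0), (0, 1), (1, 0), (1, 1)]
```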
Abstract:
We report a Lattice-Boltzmann scheme that accounts for adsorption and desorption in the calculation of mesoscale dynamical properties of tracers in media of arbitrary complexity. Lattice Boltzmann simulations made it possible to solve numerically the coupled Navier-Stokes equations of fluid dynamics and Nernst-Planck equations of electrokinetics in complex, heterogeneous media. With the moment propagation scheme, it became possible to extract the effective diffusion and dispersion coefficients of tracers, or solutes, of any charge, e.g., in porous media. Nevertheless, the dynamical properties of tracers depend on the tracer-surface affinity, which is not purely electrostatic and also includes a species-specific contribution. In order to capture this important feature, we introduce specific adsorption and desorption processes in a lattice Boltzmann scheme through a modified moment propagation algorithm, in which tracers may adsorb and desorb from surfaces through kinetic reaction rates. The method is validated on exact results for pure diffusion and diffusion-advection in Poiseuille flows in a simple geometry. We finally illustrate the importance of taking such processes into account in the time-dependent diffusion coefficient in a more complex porous medium.
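The coupling of tracer diffusion with kinetic adsorption and desorption can be sketched in one dimension with an explicit finite-difference step. This is a toy analogue, not the lattice-Boltzmann moment-propagation scheme itself, and the rates D, ka, kd are made up; the key property it shares with the paper's method is that the exchange between free and adsorbed tracer conserves total mass.

```python
def step(c, s, D=0.2, ka=0.1, kd=0.05):
    """One explicit step: diffusion of the free tracer `c` (reflective
    boundaries) plus first-order exchange with the adsorbed tracer `s`."""
    n = len(c)
    new_c = []
    for i in range(n):
        left = c[i - 1] if i > 0 else c[i]       # reflective boundary
        right = c[i + 1] if i < n - 1 else c[i]
        new_c.append(c[i] + D * (left - 2 * c[i] + right))
    for i in range(n):
        flux = ka * new_c[i] - kd * s[i]         # net adsorption
        new_c[i] -= flux
        s[i] += flux
    return new_c, s

c = [0.0] * 20
c[10] = 1.0          # initial pulse of free tracer
s = [0.0] * 20       # nothing adsorbed yet
for _ in range(100):
    c, s = step(c, s)
print(round(sum(c) + sum(s), 9))  # total mass is conserved -> 1.0
```

As in the paper, the effective spreading of the tracer is slowed by the adsorbed fraction, which is why the tracer-surface affinity enters the time-dependent diffusion coefficient.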
Abstract:
Coating and filler pigments have a strong influence on the properties of paper. Filler content can exceed 30%, and the pigment content of coating is about 85-95% by weight. The physical and chemical properties of the pigments differ, and knowledge of these properties is important for optimising the optical and printing properties of paper. The size and shape of pigment particles can be measured by different analysers, which can be based on sedimentation, laser diffraction, changes in an electric field, etc. In this master's thesis, particle properties were investigated especially by scanning electron microscopy (SEM) and image analysis programs. The research included nine pigments with different particle sizes and shapes. The pigments were analysed with two image analysis programs (INCA Feature and Poikki), a Coulter LS230 (laser diffraction) and a SediGraph 5100 (sedimentation). The results were compared to determine the effect of particle shape on the performance of the analysers. Only the image analysis programs gave parameters of particle shape. One part of the research was also sample preparation for SEM; in an ideal sample, individual particles should be separated and distinct. The analysis methods gave different results, but the results from the image analysis programs corresponded to either sedimentation or laser diffraction, depending on the particle shape. Detailed analysis of particle shape required high magnification in SEM, but the measured parameters described the shape of the particles very well. Large particles (ecd ~1 µm) could also be used in 3D modelling, which enabled measurement of particle thickness. Scanning electron microscopy and image analysis programs were effective and multifunctional tools for particle analysis. Development and experience will determine the usability of the analysis methods in routine use.
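The equivalent circle diameter (ecd) mentioned above, which image analysis programs typically derive from a particle's projected pixel area, can be sketched as follows (the mask and pixel size are illustrative):

```python
from math import pi, sqrt

def ecd(area):
    """Equivalent circle diameter: diameter of a circle of the same area."""
    return 2 * sqrt(area / pi)

def particle_area(mask, pixel_size=1.0):
    """Projected area of a particle from a binary mask (1 = particle pixel)."""
    return sum(map(sum, mask)) * pixel_size ** 2

mask = [[0, 1, 1, 0],
        [1, 1, 1, 1],
        [0, 1, 1, 0]]
area = particle_area(mask, pixel_size=0.1)  # e.g. 0.1 um per pixel
print(round(ecd(area), 4))                  # ecd of this 8-pixel particle
```

The ecd collapses shape information into a single size number, which is exactly why shape parameters from image analysis complement sedimentation and laser diffraction results.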
Abstract:
This thesis deals with distance transforms, which are a fundamental issue in image processing and computer vision. Two new distance transforms for gray level images are presented, and as a new application, they are applied to gray level image compression. The new distance transforms are both extensions of the well-known distance transform algorithm developed by Rosenfeld, Pfaltz and Lay. With some modification, their algorithm, which calculates a distance transform on binary images with a chosen kernel, has been made to calculate a chessboard-like distance transform with integer numbers (DTOCS) and a real-valued distance transform (EDTOCS) on gray level images. Both distance transforms, the DTOCS and EDTOCS, require only two passes over the gray level image and are extremely simple to implement. Only two image buffers are needed: the original gray level image and the binary image which defines the region(s) of calculation. No other image buffers are needed even if more than one iteration round is performed. For large neighborhoods and complicated images the two-pass distance algorithm has to be applied to the image more than once, typically 3-10 times. Different types of kernels can be adopted. It is important to notice that no other existing transform calculates the same kind of distance map as the DTOCS. All other gray-weighted distance function algorithms (GRAYMAT etc.) find the minimum path joining two points by the smallest sum of gray levels, or weight the distance values directly by the gray levels in some manner. The DTOCS does not weight them that way. The DTOCS gives a weighted version of the chessboard distance map, where the weights are not constant but the gray value differences of the original image. The difference between the DTOCS map and other distance transforms for gray level images is shown. The difference between the DTOCS and EDTOCS is that the EDTOCS calculates these gray level differences in a different way.
It propagates local Euclidean distances inside a kernel. Analytical derivations of some results concerning the DTOCS and the EDTOCS are presented. Commonly, distance transforms are used for feature extraction in pattern recognition and learning; their use in image compression is very rare. This thesis introduces a new application area for distance transforms. Three new image compression algorithms based on the DTOCS and one based on the EDTOCS are presented. Control points, i.e. points that are considered fundamental for the reconstruction of the image, are selected from the gray level image using the DTOCS and the EDTOCS. The first group of methods selects the maxima of the distance image as new control points, and the second group compares the DTOCS distance to the binary image chessboard distance. The effect of applying threshold masks of different sizes along the threshold boundaries is studied. The time complexity of the compression algorithms is analyzed both analytically and experimentally. It is shown that the time complexity of the algorithms is independent of the number of control points, i.e. the compression ratio. Also a new morphological image decompression scheme is presented, the 8 kernels' method. Several decompressed images are presented. The best results are obtained using the Delaunay triangulation. The obtained image quality equals that of the DCT images with a 4 x 4
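The two-pass idea behind the DTOCS can be sketched compactly: a forward and a backward raster scan each propagate a local distance of 1 plus the gray-level difference to an 8-neighbour. This is a simplified sketch; as the thesis notes, one forward-backward pair suffices only for simple images, and the binary region mask is omitted here.

```python
INF = float("inf")

def dtocs(gray, seeds):
    """Two-pass chessboard-like transform weighted by gray-level differences."""
    rows, cols = len(gray), len(gray[0])
    d = [[0.0 if (r, c) in seeds else INF for c in range(cols)]
         for r in range(rows)]

    def relax(r, c, nr, nc):
        if 0 <= nr < rows and 0 <= nc < cols:
            cost = 1 + abs(gray[r][c] - gray[nr][nc])
            d[r][c] = min(d[r][c], d[nr][nc] + cost)

    fwd = [(-1, -1), (-1, 0), (-1, 1), (0, -1)]   # upper-left 8-neighbours
    for r in range(rows):                          # forward pass
        for c in range(cols):
            for dr, dc in fwd:
                relax(r, c, r + dr, c + dc)
    for r in reversed(range(rows)):                # backward pass
        for c in reversed(range(cols)):
            for dr, dc in fwd:
                relax(r, c, r - dr, c - dc)        # lower-right neighbours
    return d

flat = [[5] * 5 for _ in range(5)]
d = dtocs(flat, seeds={(0, 0)})
print(d[4][4])  # on a flat image the transform reduces to chessboard distance: 4.0
```

On a non-flat image the gray-level differences add to each step's cost, which is exactly the non-constant weighting that distinguishes the DTOCS from other gray-weighted transforms.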
Abstract:
The purpose of gamma spectrometry and gamma and X-ray tomography of nuclear fuel is to determine both the radionuclide concentration and the integrity and deformation of nuclear fuel. The aims of this thesis have been to establish the basics of gamma spectrometry and tomography of nuclear fuel, to describe the operational mechanisms of gamma spectrometry and tomography equipment for nuclear fuel, and to identify problems that relate to these measurement techniques. In gamma spectrometry of nuclear fuel, the gamma-ray flux emitted from unstable isotopes is measured using high-resolution gamma-ray spectroscopy. The production of unstable isotopes correlates with various physical fuel parameters. In gamma emission tomography, the gamma-ray spectrum of irradiated nuclear fuel is recorded for several projections. In X-ray transmission tomography of nuclear fuel, a radiation source emits a beam and the intensity, attenuated by the nuclear fuel, is registered by detectors placed opposite. When gamma emission or X-ray transmission measurements are combined with tomographic image reconstruction methods, it is possible to create sectional images of the interior of nuclear fuel. MODHERATO is a computer code that simulates the operation of radioscopic or tomographic devices and is used to predict and optimise the performance of imaging systems. In relation to X-ray tomography, MODHERATO simulations have been performed by the author. Gamma spectrometry and gamma and X-ray tomography are promising non-destructive examination methods for understanding fuel behaviour under normal, transient and accident conditions.
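The transmission measurement described above follows the Beer-Lambert law: the detector registers the source intensity attenuated exponentially along the beam path through the fuel. A minimal sketch with made-up attenuation values (this is the physical principle only, not MODHERATO):

```python
from math import exp

def transmitted(i0, mu_along_path, step):
    """Beer-Lambert attenuation: I = I0 * exp(-sum(mu * dl)) along the beam."""
    return i0 * exp(-sum(mu * step for mu in mu_along_path))

# Beam crossing 10 voxels of fuel (mu = 0.5 per cm) in 0.2 cm steps
path = [0.5] * 10
print(round(transmitted(1000.0, path, step=0.2), 2))  # -> 367.88
```

Tomographic reconstruction inverts this relation: -ln(I/I0) for each detector gives a line integral of the attenuation coefficient, and many such projections yield the sectional image.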
Abstract:
The Large Hadron Collider (LHC) is the main particle accelerator at CERN, built with the main goal of searching for elementary particles and helping science investigate our universe. Radiation in the LHC is caused by the circular acceleration of charged particles, so detectors tracking particles under the severe conditions present during the experiments must be radiation tolerant. Moreover, a further luminosity upgrade (up to 10^35 cm^-2 s^-1) requires development of the particle detector structure. This work presents a new type of 3D stripixel detector with significant structural improvements. The new type of radiation-hard detector has a three-dimensional (3D) array of p+ and n+ electrodes that penetrate into the detector bulk. The electrons and holes are then collected at oppositely biased electrodes. The proposed 3D stripixel detector demonstrates a full depletion voltage lower than that of planar detectors. A low depletion voltage is one of the main advantages, because only the depleted part of the device is active area. Because of the small spacing between electrodes, charge collection distances are shorter, which results in a fast detector response. Dual-column detectors, containing both n+ and p+ columnar electrodes in their structure, are also briefly discussed; they show a better electric field distribution than single-sided radiation detectors, and the dead space, in other words the low electric field region, is significantly suppressed. Simulations were carried out using the Atlas device simulation software. The simulation results presented in this work include the electric field distribution under different bias voltages.
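The depletion-voltage advantage can be illustrated with the standard full-depletion relation for silicon, V_fd = q N_eff d^2 / (2 eps), which scales with the square of the electrode distance d. The numbers below are illustrative only, not the thesis's simulated values:

```python
def full_depletion_voltage(n_eff_cm3, d_um):
    """V_fd = q * N_eff * d^2 / (2 * eps) for silicon, in volts."""
    q = 1.602e-19            # elementary charge, C
    eps = 11.9 * 8.854e-14   # silicon permittivity, F/cm
    d_cm = d_um * 1e-4
    return q * n_eff_cm3 * d_cm ** 2 / (2 * eps)

n_eff = 1e12  # effective doping concentration, cm^-3 (illustrative)
v_planar = full_depletion_voltage(n_eff, 300)  # 300 um planar thickness
v_3d = full_depletion_voltage(n_eff, 50)       # 50 um electrode spacing
print(round(v_planar / v_3d))  # d^2 scaling: (300/50)^2 -> 36
```

Shrinking the depletion distance from the full wafer thickness to the inter-electrode spacing is what lets 3D geometries deplete at a fraction of the planar voltage.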
Abstract:
Multispectral images are becoming more common in the fields of remote sensing, computer vision, and industrial applications. Due to the high accuracy of the multispectral information, it can be used as an important quality factor in the inspection of industrial products. Recently, the development of multispectral imaging systems and the computational analysis of multispectral images have been the focus of growing interest. In this thesis, three areas of multispectral image analysis are considered. First, a method for analyzing multispectral textured images was developed. The method is based on a spectral cooccurrence matrix, which contains information on the joint distribution of spectral classes in a spectral domain. Next, a procedure for estimating the illumination spectrum of color images was developed. The proposed method can be used, for example, in color constancy, color correction, and content-based search from color image databases. Finally, color filters for optical pattern recognition were designed, and a prototype of a spectral vision system was constructed. The spectral vision system can be used to acquire a low-dimensional component image set for two-dimensional spectral image reconstruction. The data obtained by the spectral vision system are small and therefore convenient for storing and transmitting a spectral image.
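The spectral cooccurrence matrix above can be sketched as a count of how often pairs of spectral classes appear at a fixed pixel offset. The class map below is a toy example; the thesis's actual spectral-class definitions are not reproduced.

```python
def cooccurrence(classes, offset, n_classes):
    """Count pairs (classes[r][c], classes[r+dr][c+dc]) for offset (dr, dc)."""
    dr, dc = offset
    m = [[0] * n_classes for _ in range(n_classes)]
    rows, cols = len(classes), len(classes[0])
    for r in range(rows):
        for c in range(cols):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                m[classes[r][c]][classes[nr][nc]] += 1
    return m

label_map = [[0, 0, 1],
             [0, 1, 1],
             [1, 1, 1]]
m = cooccurrence(label_map, offset=(0, 1), n_classes=2)
print(m)  # -> [[1, 2], [0, 3]]
```

Texture statistics (contrast, homogeneity, etc.) are then derived from the normalized matrix, just as in gray-level cooccurrence analysis but over spectral classes.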
Abstract:
In image processing, segmentation algorithms constitute one of the main focuses of research. In this paper, new image segmentation algorithms based on a hard version of the information bottleneck method are presented. The objective of this method is to extract a compact representation of a variable, considered the input, with minimal loss of mutual information with respect to another variable, considered the output. First, we introduce a split-and-merge algorithm based on the definition of an information channel between a set of regions (input) of the image and the intensity histogram bins (output). From this channel, the maximization of the mutual information gain is used to optimize the image partitioning. Then, the merging process of the regions obtained in the previous phase is carried out by minimizing the loss of mutual information. From the inversion of the above channel, we also present a new histogram clustering algorithm based on the minimization of the mutual information loss, where now the input variable represents the histogram bins and the output is given by the set of regions obtained from the above split-and-merge algorithm. Finally, we introduce two new clustering algorithms which show how the information bottleneck method can be applied to the registration channel obtained when two multimodal images are correctly aligned. Different experiments on 2-D and 3-D images show the behavior of the proposed algorithms.
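The mutual information driving the split-and-merge above can be computed from a joint histogram of region labels (input) and intensity bins (output). A minimal sketch with hypothetical counts:

```python
from math import log

def mutual_information(joint):
    """I(X;Y) in nats from a joint count table joint[x][y]."""
    total = sum(sum(row) for row in joint)
    px = [sum(row) / total for row in joint]
    py = [sum(col) / total for col in zip(*joint)]
    mi = 0.0
    for x, row in enumerate(joint):
        for y, n in enumerate(row):
            if n:
                pxy = n / total
                mi += pxy * log(pxy / (px[x] * py[y]))
    return mi

# Regions perfectly predict the intensity bins: I(X;Y) = H(X) = ln 2
print(mutual_information([[50, 0], [0, 50]]))  # -> 0.693... (ln 2)
```

A split increases this quantity (information gain) when it separates regions with distinct intensity distributions; merging regions with similar distributions loses almost none of it, which is the criterion the paper optimizes.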
Abstract:
Robotic platforms have advanced greatly in terms of their remote sensing capabilities, including obtaining optical information using cameras. Alongside these advances, visual mapping has become a very active research area, as it facilitates the mapping of areas inaccessible to humans. This requires the efficient processing of data to increase the final mosaic quality and computational efficiency. In this paper, we propose an efficient image mosaicing algorithm for large-area visual mapping in underwater environments using multiple underwater robots. Our method identifies overlapping image pairs in the trajectories carried out by the different robots during the topology estimation process, this being a cornerstone for efficiently mapping large areas of the seafloor. We present comparative results based on challenging real underwater datasets which simulate multi-robot mapping.
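Identifying overlapping image pairs across robot trajectories can be sketched as a footprint-intersection test between the estimated seafloor footprints of the images. Axis-aligned rectangles are a simplifying assumption here; real topology estimation refines such candidates with feature matching.

```python
def footprints_overlap(a, b):
    """Axis-aligned footprints given as (xmin, ymin, xmax, ymax)."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def candidate_pairs(traj_a, traj_b):
    """Cross-trajectory image pairs whose footprints intersect."""
    return [(i, j) for i, fa in enumerate(traj_a)
                   for j, fb in enumerate(traj_b)
                   if footprints_overlap(fa, fb)]

robot1 = [(0, 0, 2, 2), (2, 0, 4, 2)]            # two image footprints
robot2 = [(1.5, 1.5, 3.5, 3.5), (10, 10, 12, 12)]
print(candidate_pairs(robot1, robot2))  # -> [(0, 0), (1, 0)]
```

Pruning to such candidates avoids matching every image against every other, which is what keeps large-area, multi-robot mosaicing computationally tractable.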
Abstract:
Quick removal of biosolids in aquaculture facilities, and especially in recirculating aquaculture systems (RAS), is one of the most important steps in waste management. The sedimentation dynamics of biosolids in an aquaculture tank will determine their accumulation at the bottom of the tank.