977 results for Image Reconstruction
Abstract:
A version of cascaded systems analysis was developed specifically with the aim of studying quantum noise propagation in x-ray detectors. Signal and quantum noise propagation was then modelled for six x-ray detectors used for digital mammography: four flat panel (FP) systems, one computed radiography system and one slot-scan silicon-wafer-based photon counting device. As required inputs to the model, the two-dimensional (2D) modulation transfer function (MTF), noise power spectra (NPS) and detective quantum efficiency (DQE) were measured for the six mammography systems that utilized these detectors. A new method to reconstruct anisotropic 2D presampling MTF matrices from 1D radial MTFs measured along different angular directions across the detector is described; an image of a sharp, circular disc was used for this purpose. The effective pixel fill factor for the FP systems was determined from the axial 1D presampling MTFs measured with a square sharp edge along the two orthogonal directions of the pixel lattice. Expectation MTFs (EMTFs) were then calculated by averaging the radial MTFs over all possible phases, and the 2D EMTF was formed with the same reconstruction technique used for the 2D presampling MTF. The quantum NPS was then established by noise decomposition from homogeneous images acquired as a function of detector air kerma. This was further decomposed into correlated and uncorrelated quantum components by fitting the radially averaged quantum NPS with the radially averaged squared EMTF. This whole procedure allowed a detailed analysis of the influence of aliasing, signal and noise decorrelation, x-ray capture efficiency and global secondary gain on the NPS and detector DQE. The influence of noise statistics, pixel fill factor and additional electronic and fixed-pattern noises on the DQE was also studied. The 2D cascaded model and the decompositions performed on the acquired images also explained the observed quantum NPS and DQE anisotropy.
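The angular-interpolation step behind a 2D MTF reconstruction from 1D radial measurements can be sketched as follows. This is a hypothetical illustration, not the authors' code: the function name, grid size and two-stage (radial, then angular) interpolation scheme are all assumptions.

```python
import numpy as np

def reconstruct_2d_mtf(radial_mtfs, angles_deg, f_axis, n=64):
    """Fill a 2D frequency grid from 1D radial MTFs measured along a few
    angular directions: interpolate each profile in radial frequency,
    then interpolate between directions (wrapping across 180 degrees)."""
    fx = np.linspace(-f_axis[-1], f_axis[-1], n)
    FX, FY = np.meshgrid(fx, fx)
    F = np.hypot(FX, FY)                       # radial frequency of each grid point
    TH = np.degrees(np.arctan2(FY, FX)) % 180  # direction folded to [0, 180)
    mtf2d = np.zeros_like(F)
    for i in range(n):
        for j in range(n):
            # value of every measured radial MTF at this radial frequency
            prof = np.array([np.interp(F[i, j], f_axis, m) for m in radial_mtfs])
            # angular interpolation between the nearest measured directions
            mtf2d[i, j] = np.interp(TH[i, j], angles_deg, prof, period=180)
    return mtf2d
```

For an isotropic detector (identical radial MTFs at every angle) the reconstructed matrix depends on radial frequency only, which gives a quick sanity check on the interpolation.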
Abstract:
The tourism image is an element that conditions the competitiveness of tourism destinations by making them stand out in the minds of tourists. In this context, marketers of tourism destinations endeavour to create an induced image based on their identity and distinctive characteristics. A number of authors have also recognized the complexity of tourism destinations and the need for coordination and cooperation among all tourism agents, in order to supply a satisfactory tourist product and be competitive in the tourism market. Therefore, tourism agents at the destination need to develop and integrate strategic marketing plans. The aim of this paper is to determine how cities of similar cultures use their resources with the purpose of developing a distinctive induced tourism image to attract tourists, and the extent of coordination and cooperation among the various tourism agents of a destination in the process of induced image creation. In order to accomplish these aims, a comparative analysis of the induced image of two cultural cities is presented: Girona (Spain) and Perpignan (France). The induced image is assessed through content analysis of promotional brochures, and the extent of cooperation through in-depth interviews with the main tourism agents of these destinations. Despite the similarities of both cities in terms of tourism resources, results show the use of different attributes to configure the induced image of each destination, as well as a different configuration of the network of tourism agents that participate in the process of induced image creation.
Abstract:
Study design: A retrospective study of image guided cervical implant placement precision. Objective: To describe a simple and precise classification of cervical critical screw placement. Summary of Background Data: "Critical" screw placement is defined as implant insertion into a bone corridor which is surrounded circumferentially by neurovascular structures. While the use of image guidance has improved accuracy, there is currently no classification which provides sufficient precision to assess the navigation success of critical cervical screw placement. Methods: Based on postoperative clinical evaluation and CT imaging, the orthogonal view evaluation method (OVEM) is used to classify screw accuracy into grade I (no cortical breach), grade Ia (screw thread cortical breach), grade II (internal diameter cortical breach) and grade III (major cortical breach causing neural or vascular injury). Grades II and III are considered to be navigation failures, after accounting for bone corridor/screw mismatch (minimal diameter of the targeted bone corridor being smaller than the outer screw diameter). Results: A total of 276 screws from 91 patients were classified into grade I (64.9%), grade Ia (18.1%), and grade II (17.0%). No grade III screw was observed. The overall rate of navigation failure was 13%. Multiple logistic regression indicated that navigational failure was significantly associated with the level of instrumentation and the navigation system used. Navigational failure was rare (1.6%) when the margin around the screw in the bone corridor was larger than 1.5 mm. Conclusions: OVEM evaluation appears to be a useful tool to assess the precision of critical screw placement in the cervical spine. The OVEM validity and reliability need to be addressed. Further correlation with clinical outcomes will be addressed in future studies.
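As a toy illustration of how OVEM-style grades could be assigned from measurements, the sketch below maps a measured cortical breach to a grade. The thresholds, parameter names and decision order are hypothetical assumptions, not taken from the paper:

```python
def ovem_grade(breach_mm, thread_depth_mm, neurovascular_injury=False):
    """Hypothetical OVEM-style grading of one screw.

    breach_mm:       maximal cortical breach measured on orthogonal CT views
    thread_depth_mm: radial depth of the screw thread (outer minus inner radius)
    """
    if neurovascular_injury:
        return "III"   # major breach causing neural or vascular injury
    if breach_mm <= 0:
        return "I"     # screw fully contained in the bone corridor
    if breach_mm <= thread_depth_mm:
        return "Ia"    # only the screw thread breaches the cortex
    return "II"        # the internal diameter breaches the cortex
```

Under this scheme, grades II and III would count toward the navigation-failure rate reported above.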
Abstract:
This article analyzes the implications of worker overestimation of productivity for firms in which incentives take the form of tournaments. Each worker overestimates his productivity but is aware of the bias in his opponent's self-assessment. The manager of the firm, on the other hand, correctly assesses workers' productivities and self-beliefs when setting tournament prizes. The article shows that, under a variety of circumstances, firms can benefit from worker positive self-image. The article also shows that worker positive self-image can improve welfare in tournaments. In contrast, workers' utility declines due to their own misguided choices.
Abstract:
Objective: To evaluate the evolution of mammographic image quality in the state of Rio de Janeiro on the basis of parameters measured and analyzed during health surveillance inspections in the period from 2006 to 2011. Materials and Methods: Descriptive study analyzing parameters connected with the imaging quality of 52 mammography apparatuses inspected at least twice with a one-year interval. Results: Amongst the 16 analyzed parameters, 7 presented more than 70% conformity, namely: compression paddle pressure intensity (85.1%), film development (72.7%), film response (72.7%), low-contrast fine detail (92.2%), tumor mass visualization (76.5%), absence of image artifacts (94.1%) and availability of mammography-specific developers (88.2%). On the other hand, relevant parameters were below 50% conformity, namely: monthly image quality control testing (28.8%) and high-contrast details with respect to microcalcification visualization (47.1%). Conclusion: The analysis revealed critical situations in terms of compliance with health surveillance standards. Priority should be given to those mammography apparatuses that remained non-compliant at the second inspection performed within the one-year interval.
Abstract:
OBJECTIVE: The aim of this article was to apply psychometric theory to develop and validate a visual grading scale for assessing the visual perception of digital image quality in anteroposterior (AP) pelvis radiographs. METHODS: Psychometric theory was used to guide scale development. Seven phantom and seven cadaver images of visually and objectively predetermined quality were used to help assess scale reliability and validity. 151 volunteers scored phantom images, and 184 volunteers scored cadaver images. Factor analysis and Cronbach's alpha were used to assess scale validity and reliability. RESULTS: A 24-item scale was produced. Aggregated mean volunteer scores for each image correlated with the rank order of the visually and objectively predetermined image qualities. Scale items had good interitem correlation (≥0.2) and high factor loadings (≥0.3). Cronbach's alpha (reliability) revealed that the scale has acceptable levels of internal reliability for both phantom and cadaver images (α = 0.8 and 0.9, respectively). Factor analysis suggested that the scale is multidimensional (assessing multiple quality themes). CONCLUSION: This study represents the first full development and validation of a visual image quality scale using psychometric theory. It is likely that this scale will have clinical, training and research applications. ADVANCES IN KNOWLEDGE: This article presents data to create and validate visual grading scales for radiographic examinations. The visual grading scale, for AP pelvis examinations, can act as a validated tool for future research, teaching and clinical evaluations of image quality.
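The reliability statistic used above is straightforward to compute. A minimal sketch, with the matrix layout (raters in rows, scale items in columns) and names assumed:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_raters, n_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of scale items
    item_var = scores.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return k / (k - 1) * (1 - item_var / total_var)
```

When items move in lockstep across raters the statistic reaches its maximum of 1; values around 0.8-0.9, as reported above, indicate acceptable internal consistency.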
Abstract:
Forensic intelligence has recently gathered increasing attention as a potential expansion of forensic science that may contribute in a wider policing and security context. Whilst the new avenue is certainly promising, relatively few attempts to incorporate models, methods and techniques into practical projects are reported. This work reports a practical application of a generalised and transversal framework for developing forensic intelligence processes referred to here as the Transversal model adapted from previous work. Visual features present in the images of four datasets of false identity documents were systematically profiled and compared using image processing for the detection of a series of modus operandi (M.O.) actions. The nature of these series and their relation to the notion of common source was evaluated with respect to alternative known information and inferences drawn regarding respective crime systems. 439 documents seized by police and border guard authorities across 10 jurisdictions in Switzerland with known and unknown source level links formed the datasets for this study. Training sets were developed based on both known source level data, and visually supported relationships. Performance was evaluated through the use of intra-variability and inter-variability scores drawn from over 48,000 comparisons. The optimised method exhibited significant sensitivity combined with strong specificity and demonstrates its ability to support forensic intelligence efforts.
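The split into intra-variability (same-source) and inter-variability (different-source) scores can be illustrated with a small sketch. Cosine similarity stands in here for whatever comparison metric the authors actually used, and all names are assumptions:

```python
import numpy as np

def similarity(a, b):
    """Cosine similarity between two feature profiles (e.g. M.O. feature counts)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def intra_inter_scores(profiles, sources):
    """All pairwise scores, split by whether the two documents share a source."""
    intra, inter = [], []
    for i in range(len(profiles)):
        for j in range(i + 1, len(profiles)):
            bucket = intra if sources[i] == sources[j] else inter
            bucket.append(similarity(profiles[i], profiles[j]))
    return intra, inter
```

A well-performing method concentrates intra-variability scores above the inter-variability scores, which is what sensitivity/specificity figures like those above summarise.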
Abstract:
An unsupervised approach to image segmentation which fuses region and boundary information is presented. The proposed approach takes advantage of the combined use of three different strategies: the guidance of seed placement, the control of the decision criterion, and the boundary refinement. The new algorithm uses the boundary information to initialize a set of active regions which compete for the pixels in order to segment the whole image. The method is implemented on a multiresolution representation which ensures noise robustness as well as computation efficiency. The accuracy of the segmentation results has been proven through an objective comparative evaluation of the method.
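The region-competition idea can be sketched as a toy seeded growth, where each frontier pixel is claimed by the adjacent region whose current mean intensity is closest. This simplification omits the boundary-guided seeding and multiresolution machinery described above:

```python
import numpy as np
from collections import deque

def compete_regions(img, seeds):
    """Toy region competition on a grayscale image: regions grown from the
    given seed pixels claim 4-connected neighbours; when several regions
    touch a pixel, the one with the closest mean intensity wins."""
    H, W = img.shape
    labels = -np.ones(img.shape, int)
    sums = np.zeros(len(seeds), float)
    counts = np.zeros(len(seeds), int)
    frontier = deque()
    nbrs = ((1, 0), (-1, 0), (0, 1), (0, -1))
    for k, (r, c) in enumerate(seeds):
        labels[r, c] = k; sums[k] += img[r, c]; counts[k] += 1
        frontier.append((r, c))
    while frontier:
        r, c = frontier.popleft()
        for dr, dc in nbrs:
            rr, cc = r + dr, c + dc
            if not (0 <= rr < H and 0 <= cc < W) or labels[rr, cc] >= 0:
                continue
            # regions already touching this pixel compete; closest mean wins
            adj = {labels[rr + a, cc + b] for a, b in nbrs
                   if 0 <= rr + a < H and 0 <= cc + b < W
                   and labels[rr + a, cc + b] >= 0}
            k = min(adj, key=lambda q: abs(sums[q] / counts[q] - img[rr, cc]))
            labels[rr, cc] = k; sums[k] += img[rr, cc]; counts[k] += 1
            frontier.append((rr, cc))
    return labels
```

On an image made of two flat halves with one seed in each, the competition recovers the two halves exactly.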
Abstract:
Coating and filler pigments strongly influence the properties of paper. Filler content can exceed 30%, and the pigment content of a coating is about 85-95 weight percent. The physical and chemical properties of pigments differ, and knowledge of these properties is important for optimising the optical and printing properties of paper. The size and shape of pigment particles can be measured by different analysers, based on sedimentation, laser diffraction, changes in an electric field, etc. In this master's thesis, particle properties were studied primarily by scanning electron microscopy (SEM) and image analysis programs. The research included nine pigments with different particle sizes and shapes. The pigments were analysed with two image analysis programs (INCA Feature and Poikki), a Coulter LS230 (laser diffraction) and a SediGraph 5100 (sedimentation). The results were compared to assess the effect of particle shape on the performance of the analysers. Only the image analysis programs gave parameters describing particle shape. One part of the research was also sample preparation for SEM: in an ideal sample, individual particles are separated and distinct. The analysing methods gave different results, but the results from the image analysis programs corresponded to either sedimentation or laser diffraction, depending on the particle shape. Detailed analysis of particle shape required high magnification in the SEM, but the measured parameters described the shape of the particles very well. Large particles (ecd ~ 1 µm) could also be used in 3D modelling, which enabled measurement of particle thickness. The scanning electron microscope and image analysis programs proved to be effective and versatile tools for particle analysis. Further development and experience will determine the usability of the method in routine use.
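Image-analysis particle sizing of the kind described typically reduces to labelling connected particles in a binarised SEM image and converting each particle's area to an equivalent circle diameter (ECD). A self-contained sketch, where the pixel size and function names are assumptions:

```python
import numpy as np
from collections import deque

def particle_ecds(binary, px_um=0.05):
    """Equivalent circle diameters (in micrometres) of the particles in a
    binary image, found by 4-connected flood-fill labelling.
    px_um is an assumed pixel size in micrometres."""
    H, W = binary.shape
    seen = np.zeros_like(binary, bool)
    ecds = []
    for r in range(H):
        for c in range(W):
            if binary[r, c] and not seen[r, c]:
                area, q = 0, deque([(r, c)])
                seen[r, c] = True
                while q:                      # flood-fill one particle
                    y, x = q.popleft(); area += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        yy, xx = y + dy, x + dx
                        if (0 <= yy < H and 0 <= xx < W
                                and binary[yy, xx] and not seen[yy, xx]):
                            seen[yy, xx] = True; q.append((yy, xx))
                # circle area A = pi (d/2)^2  =>  d = sqrt(4 A / pi)
                ecds.append(np.sqrt(4 * area / np.pi) * px_um)
    return ecds
```

Shape descriptors (aspect ratio, circularity) would follow the same pattern, with perimeter and bounding-box measurements added per labelled particle.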
Abstract:
In many industrial applications, accurate and fast surface reconstruction is essential for quality control. Variation in surface finishing parameters, such as surface roughness, can reflect defects in a manufacturing process, non-optimal product operational efficiency, and reduced life expectancy of the product. This thesis considers reconstruction and analysis of high-frequency variation, that is, roughness, on planar surfaces. Standard roughness measures in industry are calculated from surface topography. A fast and non-contact method to obtain surface topography is to apply photometric stereo in the estimation of surface gradients and to reconstruct the surface by integrating the gradient fields. Alternatively, visual methods, such as statistical measures, fractal dimension and distance transforms, can be used to characterize surface roughness directly from gray-scale images. In this thesis, the accuracy of distance transforms, statistical measures, and fractal dimension is evaluated in the estimation of surface roughness from gray-scale images and topographies. The results are contrasted with standard industry roughness measures. In distance transforms, the key idea is that distance values calculated along a highly varying surface are greater than distances calculated along a smoother surface. Statistical measures and fractal dimension are common surface roughness measures. In the experiments, skewness and variance of the brightness distribution, fractal dimension, and distance transforms exhibited strong linear correlations with standard industry roughness measures. One of the key strengths of the photometric stereo method is the acquisition of higher-frequency variation of surfaces. In this thesis, the reconstruction of planar high-frequency varying surfaces is studied in the presence of imaging noise and blur.
Two Wiener filter-based methods are proposed, of which one is optimal in the sense of surface power spectral density, given the spectral properties of the imaging noise and blur. Experiments show that the proposed methods preserve the inherent high-frequency variation in the reconstructed surfaces, whereas traditional reconstruction methods typically handle incorrect measurements by smoothing, which dampens the high-frequency variation.
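A standard frequency-domain way to integrate photometric-stereo gradient fields into a surface is Frankot-Chellappa-style least-squares integration. The sketch below shows only this unweighted baseline; the Wiener-filter methods proposed in the thesis additionally weight by the noise and blur spectra, which is omitted here:

```python
import numpy as np

def integrate_gradients(p, q):
    """Least-squares integration of per-pixel gradient fields p = dz/dx and
    q = dz/dy into a surface z, recovered up to an additive constant.
    Solves min ||dz/dx - p||^2 + ||dz/dy - q||^2 in the Fourier domain."""
    H, W = p.shape
    u = np.fft.fftfreq(W) * 2 * np.pi          # angular frequency per pixel
    v = np.fft.fftfreq(H) * 2 * np.pi
    U, V = np.meshgrid(u, v)
    denom = U**2 + V**2
    denom[0, 0] = 1.0                          # avoid division by zero at DC
    Z = (-1j * U * np.fft.fft2(p) - 1j * V * np.fft.fft2(q)) / denom
    Z[0, 0] = 0.0                              # fix the free constant: zero mean
    return np.real(np.fft.ifft2(Z))
```

On a periodic test surface with analytically known gradients, the reconstruction matches the true (zero-mean) surface to floating-point precision, which makes the method a convenient baseline for the noisy, blurred cases studied in the thesis.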
Abstract:
In image processing, segmentation algorithms constitute one of the main focuses of research. In this paper, new image segmentation algorithms based on a hard version of the information bottleneck method are presented. The objective of this method is to extract a compact representation of a variable, considered the input, with minimal loss of mutual information with respect to another variable, considered the output. First, we introduce a split-and-merge algorithm based on the definition of an information channel between a set of regions (input) of the image and the intensity histogram bins (output). From this channel, the maximization of the mutual information gain is used to optimize the image partitioning. Then, the merging process of the regions obtained in the previous phase is carried out by minimizing the loss of mutual information. From the inversion of the above channel, we also present a new histogram clustering algorithm based on the minimization of the mutual information loss, where now the input variable represents the histogram bins and the output is given by the set of regions obtained from the above split-and-merge algorithm. Finally, we introduce two new clustering algorithms which show how the information bottleneck method can be applied to the registration channel obtained when two multimodal images are correctly aligned. Different experiments on 2-D and 3-D images show the behavior of the proposed algorithms.
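The mutual-information quantities driving the split-and-merge decisions can be computed from a joint count table over the region/bin channel. A minimal sketch (the table layout and names are assumptions):

```python
import numpy as np

def mutual_information(joint):
    """I(R; B) in bits from a joint count table whose rows are regions
    and whose columns are intensity-histogram bins."""
    p = joint / joint.sum()                       # joint distribution p(r, b)
    pr = p.sum(axis=1, keepdims=True)             # marginal over regions
    pb = p.sum(axis=0, keepdims=True)             # marginal over bins
    nz = p > 0                                    # skip zero cells (0 log 0 = 0)
    return float((p[nz] * np.log2(p[nz] / (pr @ pb)[nz])).sum())
```

A split step would evaluate the gain I(R'; B) - I(R; B) of replacing one row by two; for example, splitting a single region whose pixels are evenly spread over two bins into two pure regions raises the mutual information from 0 to 1 bit.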
Abstract:
Describes a method to code a decimated model of an isosurface on an octree representation while maintaining volume data if it is needed. The proposed technique is based on grouping the marching cubes (MC) patterns into five configurations according to the topology and the number of planes of the surface that are contained in a cell. Moreover, the discrete number of planes on which the surface lies is fixed. Starting from a complete volume octree, with the isosurface codified at terminal nodes according to the new configurations, a bottom-up strategy is taken for merging cells. Such a strategy allows one to implicitly represent co-planar faces in the upper octree levels without introducing any error. At the end of this merging process, when required, a reconstruction strategy is applied to generate the surface contained in the octree's intersected leaves. Some examples with medical data demonstrate that a reduction of up to 50% in the number of polygons can be achieved.
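The bottom-up co-planar merging can be sketched on a toy octree in which each leaf carries a plane label: a node whose eight children are leaves with the same label collapses into a single leaf, representing the co-planar faces one level up without error. The tuple encoding and labels are illustrative assumptions, not the paper's data structure:

```python
def merge_coplanar(node):
    """Bottom-up merge of a toy octree.
    Nodes are ('leaf', plane_label) or ('node', [eight children]).
    Eight leaf children sharing one plane label collapse into one leaf."""
    kind, payload = node
    if kind == 'leaf':
        return node
    children = [merge_coplanar(c) for c in payload]   # merge depth-first
    if (all(c[0] == 'leaf' for c in children)
            and len({c[1] for c in children}) == 1):
        return ('leaf', children[0][1])               # lossless collapse
    return ('node', children)
```

Each collapse removes eight co-planar cell faces in favour of one larger face, which is the mechanism behind the polygon-count reduction reported above.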