992 results for Indirect Image Orientation


Relevance:

20.00%

Publisher:

Abstract:

Objective: To evaluate the evolution of mammographic image quality in the state of Rio de Janeiro on the basis of parameters measured and analyzed during health surveillance inspections from 2006 to 2011. Materials and Methods: Descriptive study analyzing parameters related to the imaging quality of 52 mammography apparatuses inspected at least twice with a one-year interval. Results: Among the 16 analyzed parameters, 7 presented more than 70% conformity, namely: compression paddle pressure intensity (85.1%), film development (72.7%), film response (72.7%), low-contrast fine detail (92.2%), tumor mass visualization (76.5%), absence of image artifacts (94.1%), and availability of mammography-specific developers (88.2%). On the other hand, relevant parameters were below 50% conformity, namely: monthly image quality control testing (28.8%) and visualization of high-contrast details with respect to microcalcifications (47.1%). Conclusion: The analysis revealed critical situations in terms of compliance with health surveillance standards. Priority should be given to those mammography apparatuses that remained non-compliant at the second inspection performed within the one-year interval.


OBJECTIVE: The aim of this article was to apply psychometric theory to develop and validate a visual grading scale for assessing the visual perception of digital image quality in anteroposterior (AP) pelvis radiography. METHODS: Psychometric theory was used to guide scale development. Seven phantom and seven cadaver images of visually and objectively predetermined quality were used to help assess scale reliability and validity. 151 volunteers scored phantom images, and 184 volunteers scored cadaver images. Factor analysis and Cronbach's alpha were used to assess scale validity and reliability. RESULTS: A 24-item scale was produced. Aggregated mean volunteer scores for each image correlated with the rank order of the visually and objectively predetermined image qualities. Scale items had good interitem correlation (≥0.2) and high factor loadings (≥0.3). Cronbach's alpha (reliability) revealed that the scale has acceptable levels of internal reliability for both phantom and cadaver images (α = 0.8 and 0.9, respectively). Factor analysis suggested that the scale is multidimensional (assessing multiple quality themes). CONCLUSION: This study represents the first full development and validation of a visual image quality scale using psychometric theory. The scale is likely to have clinical, training and research applications. ADVANCES IN KNOWLEDGE: This article presents data for creating and validating visual grading scales for radiographic examinations. The visual grading scale for AP pelvis examinations can act as a validated tool for future research, teaching and clinical evaluations of image quality.
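Cronbach's alpha, the reliability statistic reported above, can be computed directly from a raters-by-items score matrix. A minimal sketch (not the authors' code; the function name is ours):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_raters x n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of rater totals),
    where k is the number of items."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()   # per-item sample variances
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of each rater's total
    return k / (k - 1) * (1 - item_var / total_var)
```

Values around 0.8, as reported for the phantom images, are conventionally read as acceptable internal consistency.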


Forensic intelligence has recently gathered increasing attention as a potential expansion of forensic science that may contribute to a wider policing and security context. While this new avenue is certainly promising, relatively few attempts to incorporate its models, methods and techniques into practical projects have been reported. This work reports a practical application of a generalised and transversal framework for developing forensic intelligence processes, referred to here as the Transversal model, adapted from previous work. Visual features present in the images of four datasets of false identity documents were systematically profiled and compared using image processing to detect series of modus operandi (M.O.) actions. The nature of these series and their relation to the notion of common source were evaluated with respect to alternative known information, and inferences were drawn regarding the respective crime systems. The datasets comprised 439 documents with known and unknown source-level links, seized by police and border guard authorities across 10 jurisdictions in Switzerland. Training sets were developed based on both known source-level data and visually supported relationships. Performance was evaluated through intra-variability and inter-variability scores drawn from over 48,000 comparisons. The optimised method exhibited significant sensitivity combined with strong specificity, demonstrating its ability to support forensic intelligence efforts.


Early cinema, that is, the production of the medium's first two decades, largely characterized by autonomous shots, short stories and a fixed frame, has essentially received aesthetic study only from a narratological angle, focused in particular on the beginnings of editing. This thesis shifts the gaze (or, more simply, summons it) by proposing to give the image its due. For to those who know how to look at them, early films reveal a pictorial kinship that has so far been ignored. The images of early cinema, then significantly called "tableaux", were in fact defined against the standard of painting, and more precisely through a literal imitation of works of art. This study reveals that the tableau vivant, defined in the strict sense of the reconstruction of a pictorial composition by living actors (whether or not they hold the pose), lies at the foundation of an aesthetics of early film. The argument is structured by the illustrations the author unearths (and compares, in the manner of a spectacular, living game of spot-the-differences) from among this film production, most of which has disappeared, been burned or erased, and these pictorial references, now lost, disparaged, forgotten... Nevertheless, what is brought to light is not a few isolated examples but a genuine historical phenomenon, through a corpus of films spanning all the genres of early cinema, proving that the productions of the Film d'Art and the séries d'art, or the film Corner in Wheat (D.W. Griffith, 1909), often regarded as a beginning, are in fact much more the culmination of this tradition of creating filmic images in the form of tableaux vivants. First tracing its "context and contours", the text shows that pictorial reconstruction haunted every form of spectacle at the moment of cinema's emergence.

The stages of the period internationally cultivated a tableau vivant aesthetic, and the stage had no monopoly on the phenomenon: the photographic medium, from its very appearance, appropriated the device, both to document the visual effect of these reconstructions (something hitherto impossible) and to reinvent them, in particular in order to legitimize itself as an artistic medium capable of rivaling painting. Emerging cinema carried out a similar appropriation of the tableau vivant, which forms the heart of this work and is analyzed along four theoretical axes. Reproducing: where we discover the fundamentally indirect character of the pictorial filiation of these tableaux vivants, caught up in a dynamic of intermedial reproduction that makes them true exercises in style, through which producers experimented with, and became aware of, the artistic means of the filmic image. Re-embodying: where we study the issues raised by the "bringing to life", and more precisely the "bringing into body", of pictorial figures (in particular Jesus and the nude), involving questions of censorship and an interrogation of the gaze upon art, upon the body, and upon the status of these images, which seem more original than the original. Reanimating: where we examine how cinema sets painting in motion, putting the composition back into action, redeploying its pregnant moment, experimenting with the held pose, the freeze-frame and the whole spectrum of cinematic temporality. Finally, Reframing: where we analyze the framing of these tableaux rethought in terms of the camera and the screen, which requires complicating Bazin's theoretical categories, and which brings out the tableau vivant as a site where a tabular filmic image crystallizes, one that resists linear editing.

This resistance can be traced even into very contemporary films, which, by reactivating the motif of the tableau vivant, break the narrative linearity of editing and bring the artistic weight of the image back to the fore, thereby reviving a founding aesthetic of cinema.


Market orientation is the organizational culture that creates the behaviors necessary for continuously delivering additional value to customers and thus for continuous superior business performance. Market orientation has been studied repeatedly during the past two decades, yet research has concentrated on large firms in large domestic markets, creating a need to diversify the research. This master's thesis examined the general incidence of market orientation among SMEs from five different industries, as well as its consequences for SME performance. The empirical part of the thesis was conducted with a web-based survey that yielded 255 responses, and the survey data were analyzed statistically. The incidence of market orientation varied among its dimensions, and market orientation did not show any direct effect on firm performance; customer orientation was the only dimension with a direct (positive) effect. However, moderating effects were found, indicating that the effect of market orientation in SMEs is influenced by other factors that deserve further attention. Industry-specific differences were also discovered and should be examined further.


An unsupervised approach to image segmentation which fuses region and boundary information is presented. The proposed approach takes advantage of the combined use of three different strategies: the guidance of seed placement, the control of the decision criterion, and boundary refinement. The new algorithm uses the boundary information to initialize a set of active regions which compete for the pixels in order to segment the whole image. The method is implemented on a multiresolution representation which ensures noise robustness as well as computational efficiency. The accuracy of the segmentation results has been demonstrated through an objective comparative evaluation of the method.
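The paper's exact algorithm is not reproduced here; as a generic stand-in for regions "competing for pixels" from seeds, the classic seeded region growing scheme of Adams and Bischof illustrates the idea: seeds repeatedly claim the unlabeled neighbour that best matches a region's running mean intensity.

```python
import heapq
import numpy as np

def seeded_region_growing(img, seeds):
    """Seeds compete for pixels: the cheapest (pixel, region) claim,
    measured as |pixel intensity - region mean at push time|, wins.
    A generic illustration, not the paper's exact algorithm."""
    h, w = img.shape
    labels = -np.ones((h, w), dtype=int)   # -1 = unassigned
    sums, counts, heap = {}, {}, []

    def push_neighbours(y, x, k):
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and labels[ny, nx] < 0:
                cost = abs(float(img[ny, nx]) - sums[k] / counts[k])
                heapq.heappush(heap, (cost, ny, nx, k))

    for k, (y, x) in enumerate(seeds):
        labels[y, x] = k
        sums[k], counts[k] = float(img[y, x]), 1
    for k, (y, x) in enumerate(seeds):
        push_neighbours(y, x, k)

    while heap:
        cost, y, x, k = heapq.heappop(heap)
        if labels[y, x] >= 0:
            continue                        # already claimed by a cheaper bid
        labels[y, x] = k
        sums[k] += float(img[y, x])
        counts[k] += 1
        push_neighbours(y, x, k)
    return labels
```

In the paper's setting the seeds would come from the boundary information rather than being given by hand.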


Coating and filler pigments have a strong influence on the properties of paper. Filler content can exceed 30%, and the pigment content of a coating is about 85-95 weight percent. The physical and chemical properties of pigments differ, and knowledge of these properties is important for optimising the optical and printing properties of paper. The size and shape of pigment particles can be measured by different analysers based on sedimentation, laser diffraction, changes in an electric field, etc. This master's thesis investigated particle properties, in particular with a scanning electron microscope (SEM) and image analysis programs. The research included nine pigments with different particle sizes and shapes. The pigments were analysed by two image analysis programs (INCA Feature and Poikki), a Coulter LS230 (laser diffraction) and a SediGraph 5100 (sedimentation), and the results were compared to determine the effect of particle shape on the performance of the analysers. Only the image analysis programs gave parameters describing particle shape. One part of the research was also sample preparation for SEM; in an ideal sample, individual particles should be separated and distinct. The analysing methods gave different results, but the results from the image analysis programs corresponded to either sedimentation or laser diffraction depending on the particle shape. Detailed analysis of particle shape required high magnification in the SEM, but the measured parameters described the shape of the particles very well. Large particles (ecd ~1 µm) could also be used in 3D modelling, which enabled measurement of particle thickness. The scanning electron microscope and image analysis programs proved to be effective and versatile tools for particle analysis. Further development and experience will determine the usability of the method in routine use.
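The "ecd" mentioned above is the equivalent circular diameter, a standard shape descriptor in image-based particle sizing: the diameter of a circle with the same projected area as the particle. A minimal sketch of the standard definition (function name is ours):

```python
import math

def equivalent_circular_diameter(area_um2):
    """ECD: diameter (in µm) of a circle whose area equals the
    particle's projected area (in µm²): ecd = 2 * sqrt(A / pi)."""
    return 2.0 * math.sqrt(area_um2 / math.pi)
```

Image analysis programs typically report this per particle from the segmented pixel area, which is why they can also deliver shape parameters that sedimentation or diffraction analysers cannot.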


This thesis deals with distance transforms, which are a fundamental issue in image processing and computer vision. Two new distance transforms for gray level images are presented, and as a new application, they are applied to gray level image compression. Both new distance transforms extend the well-known distance transform algorithm developed by Rosenfeld, Pfaltz and Lay. With some modification, their algorithm, which calculates a distance transform on binary images with a chosen kernel, has been made to calculate a chessboard-like distance transform with integer numbers (DTOCS) and a real-valued distance transform (EDTOCS) on gray level images. Both distance transforms, the DTOCS and the EDTOCS, require only two passes over the gray level image and are extremely simple to implement. Only two image buffers are needed: the original gray level image and the binary image which defines the region(s) of calculation. No other image buffers are needed even if more than one iteration round is performed. For large neighborhoods and complicated images the two-pass distance algorithm has to be applied to the image more than once, typically 3-10 times. Different types of kernels can be adopted. It is important to notice that no other existing transform calculates the same kind of distance map as the DTOCS. All other gray-weighted distance function algorithms (GRAYMAT etc.) find the minimum path joining two points by the smallest sum of gray levels, or weight the distance values directly by the gray levels in some manner. The DTOCS does not weight them that way: it gives a weighted version of the chessboard distance map, whose weights are not constant but are the gray value differences of the original image. The difference between the DTOCS map and other distance transforms for gray level images is shown. The difference between the DTOCS and the EDTOCS is that the EDTOCS calculates these gray level differences in a different way: it propagates local Euclidean distances inside a kernel. Analytical derivations of some results concerning the DTOCS and the EDTOCS are presented.

Commonly, distance transforms are used for feature extraction in pattern recognition and learning; their use in image compression is very rare. This thesis introduces a new application area for distance transforms: three new image compression algorithms based on the DTOCS, and one based on the EDTOCS, are presented. Control points, i.e. points that are considered fundamental for the reconstruction of the image, are selected from the gray level image using the DTOCS and the EDTOCS. The first group of methods selects the maxima of the distance image as new control points, and the second group compares the DTOCS distance to the binary-image chessboard distance. The effect of applying threshold masks of different sizes along the threshold boundaries is studied. The time complexity of the compression algorithms is analyzed both analytically and experimentally, and it is shown to be independent of the number of control points, i.e. of the compression ratio. A new morphological image decompression scheme, the 8 kernels' method, is also presented. Several decompressed images are shown; the best results are obtained using the Delaunay triangulation, and the obtained image quality equals that of DCT images with a 4 x 4
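The two-pass structure described above can be sketched as follows. This is a hedged illustration of the chamfer-style idea the DTOCS builds on, assuming a local step cost of 1 plus the absolute gray-level difference between neighbours; the thesis's exact kernel definition may differ.

```python
import numpy as np

def dtocs_like(gray, region):
    """Two-pass chamfer-style transform in the spirit of the DTOCS.

    Distance grows by 1 + |gray-level difference| per chessboard step.
    `region` marks pixels to compute (True); False pixels act as
    zero-distance sources.  Only the two buffers mentioned in the
    abstract are used: the gray image and this binary mask."""
    h, w = gray.shape
    d = np.where(region, np.inf, 0.0)
    fwd = [(-1, -1), (-1, 0), (-1, 1), (0, -1)]   # raster-scan neighbours
    bwd = [(1, 1), (1, 0), (1, -1), (0, 1)]       # anti-raster neighbours
    for rows, neigh in ((range(h), fwd), (range(h - 1, -1, -1), bwd)):
        cols = range(w) if neigh is fwd else range(w - 1, -1, -1)
        for y in rows:
            for x in cols:
                for dy, dx in neigh:
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        step = 1.0 + abs(float(gray[y, x]) - float(gray[ny, nx]))
                        if d[ny, nx] + step < d[y, x]:
                            d[y, x] = d[ny, nx] + step
    return d
```

On a flat image the gray differences vanish and the result reduces to the ordinary chessboard distance, which matches the abstract's description of the DTOCS as a weighted chessboard map.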


Multispectral images are becoming more common in remote sensing, computer vision, and industrial applications. Due to the high accuracy of multispectral information, it can be used as an important quality factor in the inspection of industrial products. Recently, the development of multispectral imaging systems and the computational analysis of multispectral images have been the focus of growing interest. In this thesis, three areas of multispectral image analysis are considered. First, a method for analyzing multispectral textured images was developed. The method is based on a spectral cooccurrence matrix, which contains information on the joint distribution of spectral classes in the spectral domain. Next, a procedure for estimating the illumination spectrum of color images was developed. The proposed method can be used, for example, in color constancy, color correction, and content-based search in color image databases. Finally, color filters for optical pattern recognition were designed, and a prototype spectral vision system was constructed. The spectral vision system can be used to acquire a low-dimensional component image set for two-dimensional spectral image reconstruction. The data obtained by the spectral vision system are small and therefore convenient to store and transmit.
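A cooccurrence matrix of the kind named above can be sketched as follows, assuming pixels have already been quantized into spectral class labels; the non-negative offset convention and the function name are our own, not the thesis's definition.

```python
import numpy as np

def spectral_cooccurrence(labels, n_classes, dy=0, dx=1):
    """Co-occurrence matrix of class labels at pixel offset (dy, dx):
    C[i, j] counts pairs where a pixel of class i has a neighbour of
    class j at that offset (dy, dx >= 0 assumed for simplicity)."""
    h, w = labels.shape
    C = np.zeros((n_classes, n_classes), dtype=int)
    for y in range(h - dy):
        for x in range(w - dx):
            C[labels[y, x], labels[y + dy, x + dx]] += 1
    return C
```

Normalizing C by its total turns it into an estimate of the joint distribution of spectral classes, from which texture statistics can be derived.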


In image processing, segmentation algorithms constitute one of the main focuses of research. In this paper, new image segmentation algorithms based on a hard version of the information bottleneck method are presented. The objective of this method is to extract a compact representation of one variable, considered the input, with minimal loss of mutual information with respect to another variable, considered the output. First, we introduce a split-and-merge algorithm based on the definition of an information channel between a set of regions (input) of the image and the intensity histogram bins (output). From this channel, the maximization of the mutual information gain is used to optimize the image partitioning. Then, the merging of the regions obtained in the previous phase is carried out by minimizing the loss of mutual information. From the inversion of the above channel, we also present a new histogram clustering algorithm based on the minimization of the mutual information loss, where now the input variable represents the histogram bins and the output is given by the set of regions obtained from the above split-and-merge algorithm. Finally, we introduce two new clustering algorithms which show how the information bottleneck method can be applied to the registration channel obtained when two multimodal images are correctly aligned. Different experiments on 2-D and 3-D images show the behavior of the proposed algorithms.
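The quantities driving the merge step can be sketched directly: given a joint count matrix over the channel (regions as rows, intensity histogram bins as columns), merging two regions pools their rows and costs some mutual information. A minimal illustration (the helper names are ours, not the paper's API):

```python
import numpy as np

def mutual_information(joint):
    """Mutual information (in bits) of a joint count matrix
    between regions (rows) and histogram bins (columns)."""
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)   # marginal over regions
    py = p.sum(axis=0, keepdims=True)   # marginal over bins
    nz = p > 0
    return float((p[nz] * np.log2(p[nz] / (px @ py)[nz])).sum())

def merge_loss(joint, i, j):
    """MI lost by merging regions i and j (their rows are summed);
    a greedy merger would pick the pair minimizing this loss."""
    merged = np.delete(joint, j, axis=0)
    merged[i if i < j else i - 1] = joint[i] + joint[j]
    return mutual_information(joint) - mutual_information(merged)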


Robotic platforms have advanced greatly in their remote sensing capabilities, including obtaining optical information using cameras. Alongside these advances, visual mapping has become a very active research area, facilitating the mapping of areas inaccessible to humans. This requires efficient processing of the data to increase both the final mosaic quality and the computational efficiency. In this paper, we propose an efficient image mosaicing algorithm for large-area visual mapping in underwater environments using multiple underwater robots. Our method identifies overlapping image pairs across the trajectories carried out by the different robots during the topology estimation process, a cornerstone for efficiently mapping large areas of the seafloor. We present comparative results based on challenging real underwater datasets which simulate multi-robot mapping.
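Topology estimation must first propose candidate overlapping pairs before verifying them by image matching. A minimal, hypothetical proximity-gating sketch of that first step (the paper's actual method is more involved):

```python
def candidate_overlaps(positions, radius):
    """Candidate overlapping image pairs: indices (i, j), i < j, whose
    estimated camera positions lie within `radius` of each other.
    Surviving pairs would then be verified by feature matching."""
    pairs = []
    n = len(positions)
    for i in range(n):
        for j in range(i + 1, n):
            dx = positions[i][0] - positions[j][0]
            dy = positions[i][1] - positions[j][1]
            if dx * dx + dy * dy <= radius * radius:
                pairs.append((i, j))
    return pairs
```

With multiple robots, the positions list would concatenate all trajectories, so cross-robot overlaps are proposed by the same gate as within-robot ones.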


Quick removal of biosolids in aquaculture facilities, and especially in recirculating aquaculture systems (RAS), is one of the most important steps in waste management. The sedimentation dynamics of biosolids in an aquaculture tank will determine their accumulation at the bottom of the tank.


Standard indirect inference (II) estimators take a given finite-dimensional statistic, Z_n, and estimate the parameters by matching the sample statistic with the model-implied population moment. We propose a novel estimation method that utilizes all the information contained in the distribution of Z_n, not just its first moment. This is done by computing the likelihood of Z_n and then estimating the parameters either by maximizing this likelihood or by computing the posterior mean for a given prior on the parameters; these are referred to as the maximum indirect likelihood (MIL) and Bayesian indirect likelihood (BIL) estimators, respectively. We show that the IL estimators are first-order equivalent to the corresponding moment-based II estimator that employs the optimal weighting matrix. However, due to higher-order features of Z_n, the IL estimators are higher-order efficient relative to the standard II estimator. Since the likelihood of Z_n is in general unknown, simulated versions of the IL estimators are developed. Monte Carlo results for a structural auction model and a DSGE model show that the proposed estimators indeed have attractive finite-sample properties.
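In notation (a sketch; symbols other than Z_n are our own choices), write p(Z_n | θ) for the density of the statistic under the model with parameter θ and π(θ) for the prior. The two estimators described above can then be written as

```latex
\hat{\theta}_{\mathrm{MIL}} = \arg\max_{\theta} \, \log p(Z_n \mid \theta),
\qquad
\hat{\theta}_{\mathrm{BIL}} = \mathbb{E}[\theta \mid Z_n]
  = \frac{\int \theta \, p(Z_n \mid \theta)\, \pi(\theta)\, d\theta}
         {\int p(Z_n \mid \theta)\, \pi(\theta)\, d\theta},
```

whereas the moment-based II estimator they are compared against minimizes a quadratic form (Z_n - μ(θ))' W (Z_n - μ(θ)), with μ(θ) the model-implied mean of Z_n and W the optimal weighting matrix; the IL estimators exploit the full distribution of Z_n rather than only μ(θ).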