921 results for Objective Image Quality
Abstract:
This thesis deals with distance transforms, a fundamental topic in image processing and computer vision. Two new distance transforms for gray-level images are presented, and, as a new application, distance transforms are applied to gray-level image compression. Both new transforms extend the well-known distance transform algorithm developed by Rosenfeld, Pfaltz and Lay. With some modification, their algorithm, which calculates a distance transform on binary images with a chosen kernel, has been adapted to calculate a chessboard-like distance transform with integer values (the DTOCS) and a real-valued distance transform (the EDTOCS) on gray-level images. Both the DTOCS and the EDTOCS require only two passes over the gray-level image and are extremely simple to implement. Only two image buffers are needed: the original gray-level image and the binary image that defines the region(s) of calculation. No other image buffers are needed even if more than one iteration round is performed. For large neighborhoods and complicated images the two-pass distance algorithm has to be applied to the image more than once, typically 3 to 10 times. Different types of kernels can be adopted. It is important to note that no other existing transform calculates the same kind of distance map as the DTOCS. All other gray-weighted distance functions (GRAYMAT etc.) find the minimum path joining two points by the smallest sum of gray levels, or weight the distance values directly by the gray levels in some manner. The DTOCS does not weight them that way: it gives a weighted version of the chessboard distance map, in which the weights are not constant but are the gray-value differences of the original image. The difference between the DTOCS map and other distance transforms for gray-level images is shown. The DTOCS and the EDTOCS differ in how these gray-level differences are calculated.
It propagates local Euclidean distances inside a kernel. Analytical derivations of some results concerning the DTOCS and the EDTOCS are presented. Distance transforms are commonly used for feature extraction in pattern recognition and learning; their use in image compression is very rare. This thesis introduces a new application area for distance transforms. Three new image compression algorithms based on the DTOCS and one based on the EDTOCS are presented. Control points, i.e. points considered fundamental for the reconstruction of the image, are selected from the gray-level image using the DTOCS and the EDTOCS. The first group of methods selects the maxima of the distance image as new control points, and the second group compares the DTOCS distance to the binary-image chessboard distance. The effect of applying threshold masks of different sizes along the threshold boundaries is studied. The time complexity of the compression algorithms is analyzed both analytically and experimentally, and it is shown to be independent of the number of control points, i.e. of the compression ratio. A new morphological image decompression scheme, the 8 kernels' method, is also presented. Several decompressed images are shown. The best results are obtained using the Delaunay triangulation. The obtained image quality equals that of the DCT images with a 4 x 4
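The two-pass raster scan described in the abstract can be sketched as follows. This is a minimal reading of the DTOCS, not the thesis' exact formulation: the kernel is the 8-neighbour (chessboard) mask, and the local step cost is taken as the gray-value difference plus one, so that on a flat image the result reduces to the ordinary chessboard distance.

```python
import numpy as np

def dtocs(gray, seeds, n_iter=2):
    """Two-pass chessboard-like distance transform sketch.

    gray  : 2-D array of gray levels
    seeds : boolean array, True where the distance is zero
            (the binary image defining the region of calculation)
    """
    h, w = gray.shape
    d = np.where(seeds, 0.0, np.inf)
    # Forward mask: neighbours already visited in a raster scan;
    # backward mask: the mirrored set for the reverse scan.
    fwd = [(-1, -1), (-1, 0), (-1, 1), (0, -1)]
    bwd = [(1, 1), (1, 0), (1, -1), (0, 1)]
    for _ in range(n_iter):  # complicated images may need 3 to 10 rounds
        for offsets, rows, cols in (
            (fwd, range(h), range(w)),
            (bwd, range(h - 1, -1, -1), range(w - 1, -1, -1)),
        ):
            for y in rows:
                for x in cols:
                    for dy, dx in offsets:
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w:
                            # Step cost: gray-value difference plus one
                            # (an assumed concrete weighting).
                            cost = abs(float(gray[y, x]) - float(gray[ny, nx])) + 1.0
                            d[y, x] = min(d[y, x], d[ny, nx] + cost)
    return d
```

On a constant image the transform degenerates to the chessboard distance from the seed set, which matches the abstract's description of the DTOCS as a weighted chessboard map.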
Abstract:
This thesis presents two graphical user interfaces for the project DigiQ - Fusion of Digital and Visual Print Quality, a project for computationally modeling the subjective human experience of print quality by measuring the image with certain metrics. After the user interfaces are presented, methods for reducing the computation time of several of the metrics and of the image registration process required to compute them are described, together with details of their performance. The weighted-sample method for the image registration process significantly decreased calculation times at the cost of some error. The random-sampling method for the metrics greatly reduced calculation time while maintaining excellent accuracy, but worked with only two of the metrics.
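The random-sampling idea, evaluating a metric on a random subset of pixels instead of the whole image, can be illustrated with a simple estimator. The metric (mean squared error) and the `fraction` parameter are illustrative stand-ins; the thesis' actual metrics and sampling scheme are not specified in the abstract.

```python
import numpy as np

def sampled_mse(ref, test, fraction=0.1, rng=None):
    """Estimate the MSE between two images from a random pixel subset.

    fraction : share of pixels to sample (hypothetical parameter)
    rng      : seed or Generator for reproducible sampling
    """
    rng = np.random.default_rng(rng)
    flat_ref, flat_test = ref.ravel(), test.ravel()
    n = max(1, int(fraction * flat_ref.size))
    idx = rng.choice(flat_ref.size, size=n, replace=False)
    diff = flat_ref[idx].astype(float) - flat_test[idx].astype(float)
    return float(np.mean(diff ** 2))
```

Sampling 10% of the pixels cuts the per-metric work by roughly a factor of ten, at the price of an estimate whose variance depends on how uniform the error is across the image.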
Abstract:
The problem of understanding how humans perceive the quality of a reproduced image is of interest to researchers in many fields related to vision science and engineering: optics and material physics, image processing (compression and transfer), printing and media technology, and psychology. A measure of visual quality cannot be defined without ambiguity because it is ultimately the subjective opinion of an “end-user” observing the product. The purpose of this thesis is to devise computational methods to estimate the overall visual quality of prints, i.e. a numerical value that combines all the relevant attributes of the perceived image quality. The problem is limited to the perceived quality of printed photographs from the viewpoint of a consumer, and moreover, the study focuses only on digital printing methods, such as inkjet and electrophotography. The main contributions of this thesis are two novel methods for estimating the overall visual quality of prints. In the first method, the quality is computed as a visible difference between the reproduced image and the original digital (reference) image, which is assumed to have ideal quality. The second method utilises instrumental print quality measures, such as colour densities, measured from printed technical test fields, and connects the instrumental measures to the overall quality via subjective attributes, i.e. attributes that directly contribute to the perceived quality, using a Bayesian network. Both approaches were evaluated and verified with real data and shown to predict the subjective evaluation results well.
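The second method's chain, instrumental measures to subjective attributes to overall quality, can be illustrated with a toy single-attribute marginalisation. The real thesis uses a full Bayesian network over many measures and attributes; this sketch only shows the inference step P(Q|M) = Σ_A P(Q|A) P(A|M) for one attribute A, with made-up probability tables.

```python
def overall_quality(p_attr_given_meas, p_quality_given_attr):
    """Marginalise one subjective attribute out of a two-stage model.

    p_attr_given_meas   : dict attribute -> P(attribute | measures)
    p_quality_given_attr: dict attribute -> dict quality -> probability
    Returns dict quality -> P(quality | measures).
    """
    quality_levels = next(iter(p_quality_given_attr.values()))
    return {
        q: sum(p_quality_given_attr[a][q] * pa
               for a, pa in p_attr_given_meas.items())
        for q in quality_levels
    }

# Hypothetical tables: sharpness inferred from instrumental measures,
# then mapped to an overall good/bad quality judgement.
p_attr = {"sharp": 0.7, "blurry": 0.3}
p_q = {"sharp": {"good": 0.9, "bad": 0.1},
       "blurry": {"good": 0.2, "bad": 0.8}}
posterior = overall_quality(p_attr, p_q)
```

With these stand-in numbers the posterior is P(good) = 0.7·0.9 + 0.3·0.2 = 0.69, showing how the network turns measured attributes into a single quality value.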
Abstract:
In this paper, an improved technique for evolving wavelet coefficients, refined for compression and reconstruction of fingerprint images, is presented. The FBI fingerprint compression standard [1, 2] uses the CDF 9/7 wavelet filter coefficients. The lifting scheme is an efficient way to represent classical wavelets with fewer filter coefficients [3, 4]. Here a genetic algorithm (GA) is used to evolve better lifting filter coefficients for the CDF 9/7 wavelet, so as to compress and reconstruct fingerprint images with better quality. Since the lifting filter coefficients are few in number compared with the corresponding classical wavelet filter coefficients, they can be evolved at a faster rate using the GA. A better reconstructed image quality in terms of peak signal-to-noise ratio (PSNR) is achieved with the best lifting filter coefficients evolved for a compression ratio of 16:1. These evolved coefficients perform well for other compression ratios as well.
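The evolution loop can be sketched with a minimal real-coded GA. In the paper the fitness of a candidate coefficient vector would be the PSNR of the fingerprint image reconstructed with those lifting coefficients; here `fitness` is an arbitrary function to maximise, and the operators (truncation selection, blend crossover, Gaussian mutation) are generic stand-ins since the abstract does not specify them.

```python
import random

def evolve(fitness, dim=4, pop_size=20, generations=60, sigma=0.1, seed=0):
    """Evolve a real-valued coefficient vector maximising `fitness`.

    dim should match the number of lifting coefficients being tuned;
    all other parameters are illustrative defaults.
    """
    rnd = random.Random(seed)
    pop = [[rnd.uniform(-2, 2) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]      # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rnd.sample(parents, 2)
            # Blend crossover plus Gaussian mutation.
            children.append([(x + y) / 2 + rnd.gauss(0, sigma)
                             for x, y in zip(a, b)])
        pop = parents + children
    return max(pop, key=fitness)
```

Because only a handful of lifting coefficients are evolved (rather than the nine- and seven-tap classical filters), each individual is short and the search converges quickly, which is the speed advantage the paper exploits.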
Abstract:
This paper presents a new image data fusion scheme that combines median filtering with self-organizing feature map (SOFM) neural networks. The scheme consists of three steps: (1) pre-processing of the images, where weighted median filtering removes part of the noise corrupting each image; (2) pixel clustering for each image using SOFM neural networks; and (3) fusion of the images obtained in step (2), which suppresses the residual noise and thus further improves image quality. Simulations involving three image sensors, each with a different noise structure, confirm that this three-step combination offers an impressive improvement in effectiveness and performance.
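The weighted median of step (1) can be illustrated in scalar form: each sample in the filter window is replicated according to its weight before the ordinary median is taken, so heavily weighted (usually central) pixels dominate while impulsive outliers are rejected. The window and weight values below are illustrative, not the paper's.

```python
def weighted_median(window, weights):
    """Weighted median of a filter window.

    window  : sample values inside the window
    weights : positive integer weight per sample; a sample with
              weight w counts as w copies in the ordering
    """
    expanded = []
    for value, weight in zip(window, weights):
        expanded.extend([value] * weight)
    expanded.sort()
    return expanded[len(expanded) // 2]
```

For example, with a centre weight of 3 an impulsive outlier in the window is outvoted by the replicated centre sample, which is exactly the noise-rejection behaviour wanted in the pre-processing step.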
Abstract:
Although the oral cavity is easily accessible to inspection, patients with oral cancer most often present at a late stage, leading to high morbidity and mortality. Autofluorescence imaging has emerged as a promising technology to aid clinicians in screening for oral neoplasia and as an aid to resection, but current approaches rely on subjective interpretation. We present a new method to objectively delineate neoplastic oral mucosa using autofluorescence imaging. Autofluorescence images were obtained from 56 patients with oral lesions and 11 normal volunteers. From these images, 276 measurements from 159 unique regions of interest (ROI) sites corresponding to normal and confirmed neoplastic areas were identified. Data from ROIs in the first 46 subjects were used to develop a simple classification algorithm based on the ratio of red-to-green fluorescence; performance of this algorithm was then validated using data from the ROIs in the last 21 subjects. This algorithm was applied to patient images to create visual disease probability maps across the field of view. Histologic sections of resected tissue were used to validate the disease probability maps. The best discrimination between neoplastic and nonneoplastic areas was obtained at 405 nm excitation; normal tissue could be discriminated from dysplasia and invasive cancer with a 95.9% sensitivity and 96.2% specificity in the training set, and with a 100% sensitivity and 91.4% specificity in the validation set. Disease probability maps qualitatively agreed with both clinical impression and histology. Autofluorescence imaging coupled with objective image analysis provided a sensitive and noninvasive tool for the detection of oral neoplasia.
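The classification step, thresholding the pixel-wise red-to-green fluorescence ratio, can be sketched directly. The threshold value below is a placeholder, not the one fitted to the 46-subject training set in the study, and the output is a binary map rather than the study's calibrated probability map.

```python
def red_green_ratio_map(red, green, threshold=1.0, eps=1e-6):
    """Classify pixels by red-to-green fluorescence ratio.

    red, green : 2-D lists of per-pixel fluorescence intensities
    threshold  : ratio above which a pixel is flagged as suspicious
                 (hypothetical value)
    eps        : guard against division by zero in dark pixels
    Returns a 2-D list of booleans, True where the ratio exceeds
    the threshold.
    """
    return [[(r / (g + eps)) > threshold for r, g in zip(red_row, green_row)]
            for red_row, green_row in zip(red, green)]
```

Neoplastic mucosa shows loss of green stromal autofluorescence relative to red, so a raised red/green ratio is the discriminating feature the published algorithm exploits.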
Abstract:
The current organizational scenario reveals important changes. The analysis of companies' market results is being carried out quite differently from times past. Today the evaluation of companies needs to go beyond quantitative results. Corporations therefore seek tools to measure qualitative variables: they want to know about their credibility, their image, and the perception of their market positioning. In this context, the objective of the present study is to identify and analyze the evaluation formats used by companies to measure the effectiveness of the organization's qualitative results, from the perspective of the concepts of recognition, reputation, impression, corporate identity, image, quality, and satisfaction. In the analysis of the evidence collected at the case-study company, it was found that the strategic planning processes and their link to a quality program, with its respective evaluation process, are the main parameters for measuring qualitative results. These criteria and analyses, together with the control and monitoring of the goals and indicators derived from strategic planning, guide this company's results evaluation process.
Abstract:
We present the construction of a homogeneous phantom to be used in simulating the scattering and absorption of X-rays by a standard patient chest and skull when irradiated laterally. The phantom consisted of Lucite and aluminium plates whose thicknesses were determined by a tomographic exploratory method applied to an anthropomorphic phantom. Using this phantom, an optimized radiographic technique was established for the chest and skull of a standard-sized patient in lateral view. Images generated with this optimized technique demonstrated improved image quality and reduced radiation doses. (c) 2006 Elsevier Ltd. All rights reserved.
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Objective: To assess the influence of anatomical location on computed tomography (CT) numbers in mid- and full-field-of-view (FOV) cone beam computed tomography (CBCT) scans. Study Design: Polypropylene tubes with varying concentrations of dipotassium hydrogen phosphate (K2HPO4) solutions (50-1200 mg/mL) were imaged within the incisor, premolar, and molar dental sockets of a human skull phantom. CBCT scans were acquired using the NewTom 3G and NewTom 5G units. The CT numbers of the K2HPO4 phantoms were measured, and the relationship between CT numbers and K2HPO4 concentration was examined. The measured CT numbers of the K2HPO4 phantoms were compared between anatomical sites. Results: At all six anatomical locations, there was a strong linear relationship between CT numbers and K2HPO4 concentration (R² > 0.93). However, the absolute CT numbers varied considerably with the anatomical location. Conclusion: The relationship between CT numbers and object density is not uniform through the dental arch on CBCT scans. © 2013 Elsevier Inc.
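The per-site linear relationship between CT number and K2HPO4 concentration can be quantified with an ordinary least-squares fit and its coefficient of determination. This is a generic sketch of that analysis, not the study's code; the data points passed in would be (concentration, CT number) pairs measured at one anatomical site.

```python
def linear_fit(x, y):
    """Least-squares line y = a*x + b with coefficient of determination.

    Returns (slope a, intercept b, R^2), the quantities used to judge
    how linear the CT-number vs. concentration relationship is.
    """
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    a = sxy / sxx
    b = mean_y - a * mean_x
    ss_res = sum((yi - (a * xi + b)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - mean_y) ** 2 for yi in y)
    r2 = 1.0 - ss_res / ss_tot if ss_tot else 1.0
    return a, b, r2
```

Repeating the fit at each of the six socket locations and comparing the slopes and intercepts is what reveals that R² stays high everywhere while the absolute CT numbers shift with anatomy.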
Abstract:
Pós-graduação em Biologia Geral e Aplicada - IBB
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Paediatric diagnostic radiology can be considered a separate specialty, with characteristics distinct from the radiology applied to adult patients, because of the variability in the size of anatomical structures and the greater radiosensitivity of tissues. The literature mostly presents methodologies for segmentation and tissue classification in adult patients; works on tissue quantification are rare. The objective of this work was to develop an algorithm that classifies and quantifies biological tissues from histograms and converts the quantified average thickness of these tissues into the corresponding simulator materials. The results will be used in future work to optimize paediatric images, since these patients are frequently over-exposed to radiation in repeated attempts to obtain radiographic images considered to be of good quality. The developed algorithm was able to read and store the names of all the files in the operating system, filter artifacts, count and quantify each biological tissue from the histogram of the examination, obtain the average thicknesses of the biological tissues, and convert these values into the respective simulator materials. The results show that it is possible to distinguish bone, soft, fat and pulmonary tissues from histograms of tomographic examinations of the thorax. The quantification of the constituent materials of an anthropomorphic phantom made by the algorithm, compared with data from the literature, shows that the biggest difference was 21.6%, for bone. However, the literature shows that variations of up to 30% in bone thickness do not significantly influence radiographic image quality. The average thicknesses of biological tissues quantified for paediatric patients show that one phantom can simulate patients with distinct DAP ranges, since variations...
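Classifying tissues from a CT histogram amounts to summing the voxel counts that fall inside per-tissue attenuation windows. The Hounsfield-unit windows below are common textbook ranges used purely for illustration, not the ones calibrated in the thesis, and the conversion to simulator-material thickness is omitted.

```python
def quantify_tissues(hu_values, counts, windows):
    """Sum histogram voxel counts per tissue class.

    hu_values : Hounsfield-unit bin centres of the histogram
    counts    : voxel count per bin
    windows   : dict tissue name -> (low, high) inclusive HU range
    """
    totals = {name: 0 for name in windows}
    for hu, n in zip(hu_values, counts):
        for name, (low, high) in windows.items():
            if low <= hu <= high:
                totals[name] += n
                break  # each bin is assigned to the first matching class
    return totals

# Illustrative attenuation windows (not the thesis' calibrated values).
WINDOWS = {
    "lung": (-1000, -500),
    "fat": (-150, -30),
    "soft": (-29, 150),
    "bone": (300, 2000),
}
```

Multiplying each class's voxel count by the voxel size along the beam direction would then give the average tissue thickness that the algorithm converts into its simulator material.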
Abstract:
OBJECTIVE: To evaluate a comprehensive MRI protocol that investigates for cancer, vascular disease, and degenerative/inflammatory disease from the head to the pelvis in less than 40 minutes on a new-generation 48-channel 3T system. MATERIALS AND METHODS: All MR studies were performed on a 48-channel 3T MR scanner. A 20-channel head/neck coil, two 18-channel body arrays, and a 32-channel spine array were employed. A total of 4 healthy individuals were studied. The designed protocol combined single-shot T2-weighted sequences with T1-weighted 3D gradient-echo sequences acquired pre- and post-gadolinium. All images were retrospectively and independently evaluated by two radiologists for overall image quality. RESULTS: The image quality for cancer was rated as excellent in the liver, pancreas, kidneys, lungs, pelvic organs, and brain, and as fair in the colon and breast. For vascular diseases, ratings were excellent in the aorta, major branch vessel origins, inferior vena cava, and portal and hepatic veins, good in the pulmonary arteries, and poor in the coronary arteries. For degenerative/inflammatory diseases, ratings were excellent in the brain, liver and pancreas. Inter-observer agreement was excellent. CONCLUSION: Comprehensive and time-efficient screening for important categories of disease processes can be achieved with high-quality imaging on a new-generation 48-channel 3T system.