972 results for CT, Radiation Dose, Image Quality
Abstract:
A high-definition video quality metric built from full-reference ratios. Visual Quality Assessment (VQA) is one of the major open challenges in the multimedia environment. Video quality has a very strong impact on the end user's (consumer's) perception of services built on the delivery of multimedia content, and is therefore a key factor in the assessment of the new paradigm known as Quality of Experience (QoE). Video quality measurement models can be grouped into several branches according to the technical basis of the measurement system. The most prominent are those that employ psychovisual models aimed at reproducing the characteristics of the Human Visual System (HVS), and those that instead take an engineering approach, in which the quality computation is based on extracting and comparing intrinsic image features. Despite the advances made in this field in recent years, research on video quality metrics — whether operating with a reference (so-called full-reference models), with part of one (reduced-reference models), or with none at all (no-reference models) — still has a long way to go and many goals to reach. Among these, the measurement of high-definition signals, especially the very high quality signals used in the early stages of the value chain, is of particular interest because of its influence on the final quality of the service, and no reliable measurement models currently exist for it.
This doctoral thesis presents a full-reference quality measurement model that we have called PARMENIA (PArallel Ratios MEtric from iNtrInsic features Analysis), based on the weighting of four quality ratios computed from intrinsic image features: the Fidelity Ratio, computed by means of the morphological (Beucher) gradient; the Visual Similarity Ratio, computed from the visually significant points of the image obtained through local contrast filtering; the Sharpness Ratio, derived from the Haralick contrast texture statistic; and the Complexity Ratio, obtained from the homogeneity measure of the Haralick texture statistics. A novel aspect of PARMENIA is its use of mathematical morphology and Haralick statistics as the basis of a quality metric, since these techniques have traditionally been tied to remote sensing and object segmentation. The formulation of the metric as a weighted set of ratios is equally novel, since it draws both on structural similarity models and on more classical models based on the perceptibility of the error introduced by compression-related signal degradation. PARMENIA achieves very high correlation with the MOS scores obtained from the subjective user tests carried out to validate it. The working corpus was selected from internationally validated sets of sequences, so that the results reported are of the highest possible quality and rigour.
The methodology followed consisted of generating a set of test sequences of different qualities by encoding with different quantisation steps, obtaining subjective ratings for them through subjective quality tests (based on ITU-R Recommendation BT.500), and validating the metric by computing the correlation of PARMENIA with these subjective scores, quantified through the Pearson correlation coefficient. Once the ratios had been validated, their influence on the final measure optimised, and their high correlation with perception confirmed, a second evaluation was carried out on sequences from HDTV Test Dataset 1 of the Video Quality Experts Group (VQEG), where the results obtained showed clear advantages.
Abstract: Visual Quality Assessment has so far been one of the most intriguing challenges in the multimedia environment. The progressive evolution towards higher resolutions, together with increasing quality requirements (e.g. high definition and better image quality), calls for a redefinition of quality measurement models. Given the growing interest in multimedia service delivery, perceptual quality measurement has become a very active area of research. First, to sum up the state of the art, this work introduces a classification of objective video quality metrics based on their underlying methodologies and approaches to measuring video quality. This doctoral thesis then describes an enhanced solution for full-reference objective quality measurement, based on mathematical morphology, texture features, and visual similarity information, that provides a normalised metric that we have called PARMENIA (PArallel Ratios MEtric from iNtrInsic features Analysis), with scores highly correlated with MOS.
The PARMENIA metric is based on the pooling of different quality ratios obtained from three different approaches: Beucher's gradient, local contrast filtering, and the contrast and homogeneity Haralick texture features. The metric's performance is excellent and improves on the current state of the art by providing a wide dynamic range that makes it easier to discriminate between coded sequences of very similar quality, especially at very high bit rates whose quality is currently transparent to quality metrics. PARMENIA introduces a degree of novelty with respect to other working metrics: on the one hand, it exploits structural information variation to build the metric's kernel, but complements the measure with texture information and a ratio of visually meaningful points that is closer to typical error-sensitivity-based approaches. We would like to point out that PARMENIA is the only metric built upon full-reference ratios, and the only one using mathematical morphology and texture features (typically used in segmentation) for quality assessment. On the other hand, it produces results with a wide dynamic range that allows measuring the quality of high-definition sequences from bit rates of hundreds of megabits per second (Mbps) down to typical distribution rates (5-6 Mbps) and even streaming rates (1-2 Mbps). Thus, a direct correlation between PARMENIA and MOS scores is easily constructed. PARMENIA may further enhance the number of available choices in objective quality measurement, especially for very high quality HD material. All these results come from a validation carried out on internationally validated datasets on which subjective tests based on the ITU-R BT.500 methodology were performed. The Pearson correlation coefficient was calculated to verify the accuracy of PARMENIA and its reliability.
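The building blocks named in this abstract — the Beucher (morphological) gradient and the weighted pooling of full-reference ratios — can be sketched roughly as follows. This is an illustrative toy in pure Python, not the thesis implementation; the 3×3 structuring element, the gradient-energy fidelity ratio, and the equal pooling weights are all assumptions made here for demonstration.

```python
# Toy sketch of a PARMENIA-style pipeline: the Beucher gradient is
# dilation minus erosion; the final score is assumed here to be a
# simple weighted sum of quality ratios in [0, 1].

def beucher_gradient(img):
    """Morphological gradient with a 3x3 structuring element."""
    h, w = len(img), len(img[0])
    grad = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            neigh = [img[j][i]
                     for j in range(max(0, y - 1), min(h, y + 2))
                     for i in range(max(0, x - 1), min(w, x + 2))]
            grad[y][x] = max(neigh) - min(neigh)  # dilation - erosion
    return grad

def fidelity_ratio(ref, deg):
    """Toy full-reference ratio comparing gradient energies."""
    e_ref = sum(map(sum, beucher_gradient(ref))) or 1
    e_deg = sum(map(sum, beucher_gradient(deg)))
    return min(e_deg, e_ref) / max(e_deg, e_ref)

def pooled_score(ratios, weights):
    """Weighted pooling of the quality ratios (weights sum to 1)."""
    return sum(r * w for r, w in zip(ratios, weights))
```

A degraded sequence that blurs edges lowers the gradient energy, pulling the fidelity ratio below 1; the other three ratios (visual similarity, sharpness, complexity) would be pooled the same way.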
Abstract:
With the recent increase in the popularity and usage of HTTP Adaptive Streaming (HAS) techniques, various studies have been carried out in this area, generally focused on the technical enhancement of HAS technology and applications. However, the lack of a common HAS standard has led to multiple proprietary approaches developed by major Internet companies. The emerging MPEG-DASH standard standardises the packaging of the video content and the HTTP syntax, but all the details of the adaptation behaviour are left to the client implementation. Nevertheless, to design an adaptation algorithm that optimises the viewing experience of the end user, multimedia service providers need to know the Quality of Experience (QoE) of different adaptation schemes. Taking this into account, the objective of this experiment was to study the QoE of a HAS-based video broadcast model. The experiment was carried out as a subjective study of end-user responses to various possible client behaviours for changing the video quality, taking different QoE-influencing factors into account. The experimental conclusions provide good insight into the QoE of different adaptation schemes, which can be exploited by HAS clients when designing adaptation algorithms.
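Since MPEG-DASH leaves the adaptation logic to the client, a minimal throughput-based rule of the kind such a client might implement can be sketched as below. The bitrate ladder, safety margin, and buffer threshold are hypothetical values chosen for illustration, not part of any standard.

```python
# Hypothetical DASH-style adaptation rule: pick the highest bitrate
# sustainable at a fraction of the measured throughput, and fall back
# to the lowest rung when the playback buffer runs low.

BITRATE_LADDER_KBPS = [500, 1200, 2500, 5000]  # available representations

def choose_representation(throughput_kbps, buffer_s,
                          margin=0.8, low_buffer_s=5.0):
    """Return the bitrate (kbps) to request for the next segment."""
    if buffer_s < low_buffer_s:
        return BITRATE_LADDER_KBPS[0]   # protect against stalling
    budget = margin * throughput_kbps
    chosen = BITRATE_LADDER_KBPS[0]
    for rate in BITRATE_LADDER_KBPS:
        if rate <= budget:
            chosen = rate               # highest rung within budget
    return chosen
```

The QoE findings of studies like the one above feed directly into choices such as the size of the safety margin and how aggressively to switch representations.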
Abstract:
LHE (logarithmical hopping encoding) is a computationally efficient image compression algorithm that exploits the Weber–Fechner law to encode the error between colour component predictions and the actual values of those components. More concretely, for each pixel, luminance and chrominance predictions are calculated as a function of the surrounding pixels, and the errors between the predictions and the actual values are then logarithmically quantised. The main advantage of LHE is that, although it is capable of low-bit-rate encoding with high-quality results in terms of peak signal-to-noise ratio (PSNR) and both full-reference (FSIM) and no-reference (blind/referenceless image spatial quality evaluator) image quality metrics, its time complexity is O(n) and its memory complexity is O(1). Furthermore, an enhanced version of the algorithm is proposed, in which the output codes provided by the logarithmic quantiser are used in a pre-processing stage to estimate the perceptual relevance of the image blocks. This allows the algorithm to downsample the blocks with low perceptual relevance, thus improving the compression rate. The performance of LHE is especially remarkable when the bits-per-pixel rate is low, showing much better quality, in terms of PSNR and FSIM, than JPEG, and only slightly lower quality than JPEG 2000 while being more computationally efficient.
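The core idea — predict each sample, then quantise the prediction error on a logarithmic grid so small errors get fine steps and large errors coarse ones — can be sketched as below. This is an illustrative simplification, not the published LHE codec: the left-neighbour predictor and the hop magnitudes are assumptions made here.

```python
# Sketch of logarithmic-hop quantisation of prediction errors
# (Weber-Fechner: perceptual steps grow with magnitude). One pass,
# one predictor value: O(n) time, O(1) extra state per row.

HOPS = [0, 4, 8, 16, 32]  # hypothetical positive hop magnitudes

def quantise_error(err):
    """Map a signed prediction error to the nearest signed hop."""
    mag = min(HOPS, key=lambda h: abs(abs(err) - h))
    return mag if err >= 0 else -mag

def encode_row(row):
    """Encode one row of 8-bit samples as (first sample, hops...)."""
    pred, out = row[0], [row[0]]          # first sample sent verbatim
    for sample in row[1:]:
        hop = quantise_error(sample - pred)
        out.append(hop)
        pred = max(0, min(255, pred + hop))  # decoder-side reconstruction
    return out
```

Because the decoder can rebuild `pred` from the hops alone, only the hop indices need to be entropy-coded; the enhanced LHE variant described above additionally reuses these codes to estimate per-block perceptual relevance.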
Abstract:
The involvement of the antioxidant enzymes superoxide dismutase (SOD), catalase (CAT), and glutathione peroxidase in radiobiological processes has been described at the enzyme activity level. We irradiated radiation-resistant (RR) and radiation-sensitive (RS) mice and studied antioxidant enzymes at the transcriptional and activity level. In addition, aromatic hydroxylation and lipid peroxidation parameters were determined to study radiation resistance at the oxidation level. RS BALB/c/J Him mice and RR C3H He/Him mice were whole-body-irradiated with x-rays at 2, 4, and 6 Gy and killed 5, 15, and 30 min after irradiation. mRNA was isolated from liver and hybridized with probes for antioxidant enzymes and β-actin as a housekeeping gene control. Antioxidant enzyme activities were determined by standard assays. Parameters for aromatic hydroxylation (o-tyrosine) and lipid peroxidation (malondialdehyde) were determined by HPLC methods. Antioxidant transcription was unchanged in contrast to antioxidant activities; SOD and CAT activities were elevated within 15 min in RR animals but not in RS mice, at all doses studied. Glutathione peroxidase activity was not different between RR and RS mice and was only moderately elevated after irradiation. No significant differences were found between RR and RS animals at the oxidation level, although a radiation dose-dependent increase of oxidation products was detected in both groups. We found that ionizing irradiation led to increased antioxidant activity only minutes after irradiation in the absence of increased transcription of these antioxidant enzymes. RR animals show higher antioxidant enzyme activities than do RS mice, but oxidation products are comparable in RS and RR mice. As unchanged transcription of antioxidant enzymes could not have been responsible for the increased antioxidant enzyme activities, preformed antioxidant enzymes should have been released by the irradiation process. 
This would be in agreement with previous studies of preformed, stored SOD. The finding of higher SOD and CAT activities in RR than in RS animals could point to a role for these antioxidant enzymes for the process of radiation sensitivity.
Abstract:
Metal artifacts cause artifactual thickening of stent walls on computed tomography (CT), with an apparent reduction of their lumen. This prospective cross-sectional study, with a repeated-measures design and blinded observers, in 24 consecutive patients with 71 coronary stents, aimed to compare stent wall thickness on CT after reconstruction with an edge-enhancing algorithm versus a standard algorithm. Coronary CT angiography was performed on a 256-slice scanner, with reconstructions using both the edge-enhancing and the standard algorithm. Stent wall thickness was measured with orthogonal (diameter-based) and circumferential (circumference-based) methods. Stent image quality was rated on an ordinal scale, and the data were analysed with linear mixed models and proportional-odds logistic regression. Stent wall thickness was lower with the edge-enhancing algorithm than with the standard algorithm, with both the orthogonal (0.97 ± 0.02 vs 1.09 ± 0.03 mm, respectively; p < 0.001) and circumferential (1.13 ± 0.02 vs 1.21 ± 0.02 mm, respectively; p < 0.001) methods. The former caused less overestimation relative to the nominal thickness than the latter, with both the orthogonal (0.89 ± 0.19 vs 1.00 ± 0.26 mm, respectively; p < 0.001) and circumferential (1.06 ± 0.26 vs 1.13 ± 0.31 mm, respectively; p = 0.005) methods, reducing overestimation by 6%. Quality scores were better with the edge-enhancing algorithm (OR 3.71; 95% CI 2.33–5.92; p < 0.001). In conclusion, image reconstruction with the edge-enhancing algorithm yields thinner stent walls, less overestimation, and better image quality scores than the standard algorithm.
Abstract:
Background: Remote access to pediatric cardiology diagnostic services is enabled by real-time transmission of echocardiographic images. Several transmission bandwidths have been used, but there has been little analysis of the image quality provided by different bandwidths. We designed a study of the quality of transmitted images at various bandwidths. Methods: Two echocardiographers viewed, in random order, a series of 13 recorded pediatric echocardiographic images either directly or after transmission using 1 of 4 bandwidths: 256, 384, 512, or 768 kbps. An image clarity scoring scale was used to assess the image quality of cardiac structures. Results: Measurable differences were found in image quality across transmission bandwidths; 512 kbps was the minimum for consistently clear imaging of all cardiac structures examined. Conclusion: Bandwidths greater than 512 kbps confer subjectively sharper images, although this could not be quantified by our methods.
Abstract:
We have estimated the maximum radiation dose received from consuming an oyster at Hiroshima following the A-bomb detonation in 1945.
Abstract:
For several decades, a dose of 25 kGy of gamma irradiation has been recommended for terminal sterilization of medical products, including bone allografts. In practice, the gamma dose applied varies from tissue bank to tissue bank. While many banks use 25 kGy, some have adopted a higher dose, some choose lower doses, and others do not use irradiation for terminal sterilization at all. A revolution in quality control in the tissue banking industry has occurred alongside the development of quality assurance standards. These have resulted in significant reductions in the risk of contamination of final graft products by microorganisms. In light of these developments, there is sufficient rationale to establish a new standard dose, sufficient to sterilize allograft bone while minimizing the adverse effects of gamma radiation on tissue properties. Using valid modifications, several authors have applied ISO standards to establish a radiation dose for bone allografts that is specific to the systems employed in bone banking. These standards, and their verification, suggest that the actual dose could be reduced significantly from 25 kGy while maintaining a valid sterility assurance level (SAL) of 10−6. The current paper reviews the methods that have been used to develop radiation doses for terminal sterilization of medical products, and the current trend for selection of a specific dose for tissue banks.
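The link between bioburden and the dose needed for a given SAL can be illustrated with a first-order inactivation model: if a D10 value (the dose that reduces the microbial population tenfold) applies, the required dose is D10 × (log10(N0) − log10(SAL)). This is a textbook simplification, not the ISO 11137 dose-setting procedure, and the numbers used below are hypothetical.

```python
import math

# First-order inactivation sketch: each D10 of dose removes one
# log10 of the microbial population, so reaching a sterility
# assurance level (SAL) from an initial bioburden N0 requires
# dose = D10 * (log10(N0) - log10(SAL)).

def required_dose_kgy(d10_kgy, bioburden, sal=1e-6):
    """Dose (kGy) to take an initial bioburden down to the SAL."""
    return d10_kgy * (math.log10(bioburden) - math.log10(sal))
```

Under this model, a hypothetical D10 of 2 kGy and a low bioburden of 100 organisms would give 2 × (2 + 6) = 16 kGy — illustrating why tighter bioburden control can justify doses below the traditional 25 kGy.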
Abstract:
We have studied the effect of Co-60 gamma irradiation on the characteristics of Type IA fiber Bragg gratings. A record Bragg peak shift of 190 pm was observed for a grating written in Fibercore PS-1250/1500 photosensitive fiber at a radiation dose of 116 kGy. Type IA and Type I gratings show different kinetics under radiation and during post-radiation annealing, which can be exploited in the design of a grating-based dosimetry system.
Abstract:
We have studied the effect of Co-60 ionizing radiation on the characteristics of Type IA fiber Bragg gratings. A record Bragg peak shift of 190 pm was observed for a grating written in Fibercore PS-1250/1500 photosensitive fiber at a radiation dose of 116 kGy. Type IA and Type I gratings show different kinetics under radiation and during post-radiation annealing, which can be exploited in the design of a grating-based dosimetry system.
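A grating-based dosimeter of the kind suggested above would invert a measured Bragg peak shift back to a dose through a calibration curve. The sketch below assumes a monotonic response and uses piecewise-linear interpolation; all calibration points are invented for illustration except the (116 kGy, 190 pm) figure quoted in the abstract, and a real radiation response would need careful characterisation.

```python
# Dosimetry sketch: estimate dose from a measured Bragg peak shift
# by piecewise-linear interpolation of a (dose, shift) calibration
# curve. Intermediate points are hypothetical.

CALIBRATION = [(0.0, 0.0), (30.0, 80.0), (116.0, 190.0)]  # (kGy, pm)

def dose_from_shift(shift_pm):
    """Invert the calibration curve (monotonic shift assumed)."""
    for (d0, s0), (d1, s1) in zip(CALIBRATION, CALIBRATION[1:]):
        if s0 <= shift_pm <= s1:
            # linear interpolation within this calibration segment
            return d0 + (shift_pm - s0) * (d1 - d0) / (s1 - s0)
    raise ValueError("shift outside calibrated range")
```

The different post-irradiation kinetics of Type IA and Type I gratings mentioned above would, in practice, let a two-grating sensor separate dose from temperature and annealing effects.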
Abstract:
The tear film, cornea and lens dictate the refractive power of the eye, and retinal image quality is principally defined by diffraction, whole-eye wavefront error, scatter, and chromatic aberration. Diffraction and wave aberration are fundamentally dependent on pupil diameter; however, scatter can be induced by refractive surgery and, in the normally ageing eye, becomes an increasingly important factor defining retinal image quality. The component of visual quality most affected by the tear film, refractive surgery, and multifocal contact and intraocular lenses is the wave aberration of the eye. This body of work demonstrates the effects of each of these anomalies on the visual quality of the eye. When assessing normal or borderline self-diagnosed dry eye subjects using aberrometry, combining lubricating eye drops and spray does not offer any benefit over the individual products. However, subjects perceive a difference in comfort for all interventions after one hour. Total higher-order aberrations increase after laser-assisted sub-epithelial keratectomy performed using a solid-state laser on myopes, but this causes no significant decrease in contrast sensitivity or increase in glare disability. Mean sensitivity and reliability indices for perimetry were comparable to pre-surgery results. Multifocal contact lenses and intraocular lenses are designed to maximise vision when the patient is binocular, so any evaluation of the eyes individually is confounded by reduced individual visual acuity and visual quality. Different designs of aspheric multifocal contact lenses do not provide the same level of visual quality. Multifocal contact lenses adversely affect mean deviation values for perimetry, and this should be considered when screening individuals with multifocal contact or intraocular lenses. Photographic image quality obtained through a multifocal contact or intraocular lens appears to be unchanged.
Future work should evaluate the effect of these anomalies in combination, with the aim of providing the best visual quality possible and supplying normative data for screening purposes.
Abstract:
Aim: To determine the theoretical and clinical minimum image pixel resolution and maximum compression appropriate for anterior eye image storage. Methods: Clinical images of the bulbar conjunctiva, palpebral conjunctiva, and corneal staining were taken at the maximum resolution of Nikon:CoolPix990 (2048 × 1360 pixels), DVC:1312C (1280 × 811), and JAI:CV-S3200 (767 × 569) single chip cameras and the JVC:KYF58 (767 × 569) three chip camera. The images were stored in TIFF format and further copies created with reduced resolution or compressed. The images were then ranked for clarity on a 15 inch monitor (resolution 1280 × 1024) by 20 optometrists and analysed by objective image analysis grading. Theoretical calculation of the resolution necessary to detect the smallest objects of clinical interest was also conducted. Results: Theoretical calculation suggested that the minimum resolution should be ≥579 horizontal pixels at 25 × magnification. Image quality was perceived subjectively as being reduced when the pixel resolution was lower than 767 × 569 (p<0.005) or the image was compressed as a BMP or <50% quality JPEG (p<0.005). Objective image analysis techniques were less susceptible to changes in image quality, particularly when using colour extraction techniques. Conclusion: It is appropriate to store anterior eye images at between 1280 × 811 and 767 × 569 pixel resolution and at up to 1:70 JPEG compression.
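The theoretical part of this study — a minimum pixel count for detecting the smallest clinically interesting feature — amounts to a Nyquist-style sampling argument: at least two pixels are needed across the smallest feature. The helper and numbers below are a back-of-envelope sketch under that assumption, not the authors' actual calculation.

```python
import math

# Nyquist-style sampling sketch: resolving a feature of size
# `feature_mm` anywhere across a field of view of width `field_mm`
# requires at least `pixels_per_feature` pixels per feature width.

def min_horizontal_pixels(field_mm, feature_mm, pixels_per_feature=2):
    """Smallest sensor width (pixels) that still samples the feature."""
    return math.ceil(pixels_per_feature * field_mm / feature_mm)
```

For example, a hypothetical 36 mm field of view and a 0.125 mm smallest feature would demand at least 576 horizontal pixels, the same order as the ≥579-pixel figure derived in the study.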
Abstract:
JPEG2000 is an emerging image coding standard. In this paper we analyze the performance of the error resilience tools in JPEG2000, and present an analytical model to estimate the quality of JPEG2000-encoded images transmitted over wireless channels. The effectiveness of the analytical model is validated by simulation results. Furthermore, the analytical model is utilized by the base station to design efficient unequal error protection schemes for JPEG2000 transmission. In the design, a utility function is defined to make a trade-off between the image quality and the cost of transmitting the image over the wireless channel. © 2002 IEEE.
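A utility-driven choice among protection schemes of the kind the paper describes can be sketched as follows. The candidate schemes, their expected-quality and cost figures, and the linear form U = quality − λ·cost are all assumptions made here for illustration; the paper's actual utility function and channel model are more involved.

```python
# Sketch of a quality/cost trade-off for unequal error protection:
# pick the scheme maximising U = quality - lam * cost. All figures
# below are hypothetical.

SCHEMES = [
    # (name, expected_quality, transmission_cost)
    ("no_protection", 0.70, 1.00),
    ("light_uep",     0.85, 1.15),
    ("heavy_uep",     0.90, 1.60),
]

def best_scheme(lam):
    """Maximise utility over the candidate protection schemes."""
    return max(SCHEMES, key=lambda s: s[1] - lam * s[2])[0]
```

Sweeping λ shows the intended behaviour: when transmission cost matters little, heavier protection wins; as cost is weighted more strongly, the base station backs off to lighter protection.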