980 results for objective quality assessment
Abstract:
The goal of no-reference (NR) image quality assessment (IQA) is to build a computational model that predicts the visual quality of an image. A prominent existing method is based on natural scene statistics (NSS) and uses the joint and marginal distributions of wavelet coefficients for IQA. However, that method is only applicable to JPEG2000-compressed images, and the wavelet transform fails to capture the directional information of images; an improved NSS model is therefore established using contourlets. In this paper, the contourlet transform is applied to model the NSS of images, and the relationships among contourlet coefficients are represented by their joint distribution. The statistics of contourlet coefficients are suitable indicators of variation in image quality. In addition, an image-dependent threshold is adopted to reduce the effect of image content on the statistical model. Finally, image quality is evaluated by nonlinearly combining the features extracted from each subband. Our algorithm is trained and tested on the LIVE database II. Experimental results demonstrate that the proposed algorithm is superior to the conventional NSS model and can be applied to different distortions. © 2009 Elsevier B.V. All rights reserved.
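As a rough illustration of the subband-statistics idea, the sketch below extracts marginal-distribution features from a multiscale decomposition. No contourlet transform ships with common Python libraries, so a wavelet decomposition (pywt) stands in for it here; the generalized Gaussian fit per subband is an assumption in the spirit of NSS modelling, not the paper's exact feature set.

```python
# Hedged NSS-style feature extraction: a wavelet decomposition stands in for
# the paper's contourlet transform, and a generalized Gaussian (GGD) is fitted
# to each detail subband's marginal coefficient distribution.
import numpy as np
import pywt
from scipy.stats import gennorm, kurtosis

def subband_nss_features(image, wavelet="db2", levels=3):
    """Return (shape, scale, kurtosis) per detail subband as quality-sensitive features."""
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=levels)
    features = []
    for level_bands in coeffs[1:]:          # skip the approximation band
        for band in level_bands:            # horizontal, vertical, diagonal details
            c = band.ravel()
            beta, _, scale = gennorm.fit(c, floc=0.0)   # GGD shape and scale
            features.append((beta, scale, kurtosis(c)))
    return np.asarray(features)
```

These features would then be regressed against subjective scores (e.g., on LIVE II) to produce the final quality prediction.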
Abstract:
This work looks into video quality assessment applied to the field of telecare and proposes an alternative metric to the more traditionally used PSNR, based on the requirements of such an application. We show that the Pause Intensity metric introduced in [1] is also relevant and applicable to heterogeneous networks with a wireless last hop connected to a wired TCP backbone. We demonstrate through our emulation testbed that the impairments experienced in such a network architecture are dominated by continuity-based impairments rather than artifacts such as motion drift or blockiness. We also examine the implications of using Pause Intensity as a metric for overall video latency, which is potentially problematic should the video be sent and acted upon in real time. We conclude that Pause Intensity may be used alongside the video characteristics that have been suggested as a measure of overall video quality. © 2012 IEEE.
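For illustration, the sketch below computes a simplified pause-based continuity score from a playback event log. The exact Pause Intensity definition is given in the cited reference [1]; the combination of pause duration and frequency used here is only a hedged proxy for it.

```python
# Simplified proxy for a pause-based continuity metric: combines normalized
# total pause duration with pause frequency over a playback session.
from dataclasses import dataclass

@dataclass
class Pause:
    start: float      # seconds into the session
    duration: float   # seconds stalled

def pause_intensity_proxy(pauses, playback_seconds):
    """Return a score where 0 means smooth playback and larger means choppier."""
    if playback_seconds <= 0:
        raise ValueError("playback_seconds must be positive")
    total_pause = sum(p.duration for p in pauses)
    duration_ratio = total_pause / (playback_seconds + total_pause)
    frequency = len(pauses) / playback_seconds      # pauses per second
    return duration_ratio * (1.0 + frequency)       # penalize frequent short stalls

events = [Pause(3.0, 0.8), Pause(10.5, 2.1)]
print(pause_intensity_proxy(events, playback_seconds=60.0))
```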
Abstract:
The availability of BRAF inhibitors has given metastatic melanoma patients an effective new treatment choice, and molecular testing to determine the presence or absence of a BRAF codon 600 mutation is pivotal in the clinical management of these patients. This molecular test must be performed accurately and appropriately to ensure that the patient receives the most suitable treatment in a timely manner. Laboratories have introduced such testing; however, some have low sample throughput, making it critical that an external quality assurance programme is available to help promote a high standard of testing and reporting and to provide an educational component for BRAF molecular testing. Laboratories took part in three rounds of external quality assessment (EQA) during a 12-month period, giving participants a measure of the accuracy of their genotyping, clinical interpretation of the results, and experience in testing a range of different samples. Formalin-fixed, paraffin-embedded tissue sections from malignant melanoma patients were distributed to participants for BRAF molecular testing. The standard of testing was generally high, but distribution of a mutation other than the most common one, p.(Val600Glu), highlighted concerns with the detection or reporting of rarer mutations. The main issues raised in the interpretation of the results were the importance of clear, unambiguous interpretation tailored to the patient, and the understanding that the treatment differs from that given in other stratified medicine programmes. The variability in reporting and the wide range of methodologies used indicate a continuing need for EQA in this field.
Abstract:
Nanotechnology has revolutionised humanity's capability to build microscopic systems by manipulating materials on a molecular and atomic scale. Nanosystems are becoming increasingly smaller and more chemically complex, which increases the demand for microscopic characterisation techniques. Among others, transmission electron microscopy (TEM) is an indispensable tool that is increasingly used to study the structures of nanosystems down to the molecular and atomic scale. However, despite the effectiveness of this tool, it can only provide two-dimensional projection (shadow) images of the 3D structure, leaving the three-dimensional information hidden, which can lead to incomplete or erroneous characterisation. One very promising inspection method is electron tomography (ET), which is rapidly becoming an important tool to explore the 3D nano-world. ET provides (sub-)nanometre resolution in all three dimensions of the sample under investigation. However, the fidelity of the ET tomogram achieved by current reconstruction procedures remains a major challenge. This thesis addresses the assessment and advancement of electron tomographic methods to enable high-fidelity three-dimensional investigations. A quality assessment investigation was conducted to provide a quantitative analysis of the main established ET reconstruction algorithms and to study the influence of experimental conditions on the quality of the reconstructed tomogram. Regularly shaped nanoparticles were used as a ground truth for this study. It is concluded that the fidelity of post-reconstruction quantitative analysis and segmentation is limited mainly by the fidelity of the reconstructed tomogram, which motivates the development of an improved tomographic reconstruction process. This thesis therefore proposes a novel ET method, named dictionary learning electron tomography (DLET). DLET is based on the recent mathematical theory of compressed sensing (CS), which exploits the sparsity of ET tomograms to enable accurate reconstruction from undersampled (S)TEM tilt series. DLET learns the sparsifying transform (dictionary) adaptively while reconstructing the tomogram from highly undersampled tilt series. In this method, sparsity is applied to overlapping image patches, favouring local structures; furthermore, the dictionary is adapted to the specific tomogram instance, yielding better sparsity and consequently higher-quality reconstructions. The reconstruction algorithm alternates between two steps: it learns the sparsifying dictionary and employs it to remove artifacts and noise in one step, and restores the tomogram data in the other. Simulated and real ET experiments on several morphologies were performed with a variety of setups. The reconstruction results validate the method's efficiency in both noiseless and noisy cases and show that it yields improved reconstruction quality with fast convergence. The proposed method recovers high-fidelity information without requiring a choice of sparsifying transform or images that strictly satisfy the preconditions of a particular transform (e.g. strictly piecewise-constant content for Total Variation minimisation). It thereby also avoids artifacts that specific sparsifying transforms can introduce (e.g. the staircase artifacts that may result from Total Variation minimisation).
Moreover, this thesis shows how reliable elementally sensitive tomography using EELS is possible with the aid of both the appropriate use of dual electron energy loss spectroscopy (DualEELS) and the DLET compressed sensing algorithm, making the best use of the limited data volume and signal-to-noise ratio inherent in core-loss electron energy loss spectroscopy (EELS) from nanoparticles of an industrially important material. Taken together, the results presented in this thesis demonstrate how high-fidelity ET reconstructions can be achieved using a compressed sensing approach.
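The alternating patch-dictionary step at the heart of DLET can be illustrated roughly as below, assuming scikit-learn's patch-based dictionary learning as a stand-in for the thesis's own implementation. The data-fidelity update against the undersampled tilt series, which DLET alternates with this step, is omitted here.

```python
# Sketch of one dictionary-learning denoising pass on a 2D tomogram slice:
# learn a sparsifying dictionary on overlapping patches, sparse-code them with
# OMP, and rebuild the slice from the approximated patches.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import (extract_patches_2d,
                                              reconstruct_from_patches_2d)

def dictionary_denoise_slice(slice2d, patch_size=(8, 8), n_atoms=64):
    patches = extract_patches_2d(slice2d, patch_size)
    shape = patches.shape
    X = patches.reshape(shape[0], -1)
    mean = X.mean(axis=1, keepdims=True)
    X = X - mean                                    # learn on zero-mean patches
    dico = MiniBatchDictionaryLearning(n_components=n_atoms,
                                       transform_algorithm="omp",
                                       transform_n_nonzero_coefs=4)
    codes = dico.fit(X).transform(X)                # sparse codes per patch
    X_hat = codes @ dico.components_ + mean         # sparse approximation
    return reconstruct_from_patches_2d(X_hat.reshape(shape), slice2d.shape)
```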
Abstract:
Sediment quality in the Paranagua Estuarine System (PES), a highly important port and ecological zone, was evaluated by assessing three lines of evidence: (1) sediment physical-chemical characteristics; (2) sediment toxicity (elutriates, sediment-water interface, and whole sediment); and (3) benthic community structure. Results revealed a gradient of increasing sediment degradation (i.e. higher concentrations of trace metals, higher toxicity, and impoverishment of the benthic community structure) towards the inner PES. Data integration by principal component analysis (PCA) showed a positive correlation between some contaminants (mainly As, Cr, Ni, and Pb) and toxicity in samples collected from stations located in the upper estuary and at one station placed away from contamination sources. The benthic community structure appears to be affected both by pollution and by the naturally fine-grained character of the sediments, which reinforces the importance of a weight-of-evidence approach to evaluating the sediments of the PES. (C) 2008 Elsevier B.V. All rights reserved.
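The PCA-based data integration step can be sketched as below; the station data and variable names are placeholders, not the study's dataset.

```python
# Hedged sketch of integrating stations-by-variables sediment data with PCA;
# variables loading with the same sign on a component covary across stations
# (e.g. metals loading together with toxicity, as the abstract reports).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

variables = ["As", "Cr", "Ni", "Pb", "toxicity", "fines_pct"]   # placeholder names
X = np.random.default_rng(0).normal(size=(12, len(variables)))  # placeholder data

pca = PCA(n_components=2).fit(StandardScaler().fit_transform(X))
for var, loading in zip(variables, pca.components_[0]):
    print(f"{var:10s} PC1 loading = {loading:+.2f}")
```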
Abstract:
Context: The neonatal mortality rate is declining globally. The aim of the present study is to identify, through a systematic review, relevant indicators for assessing newborn care in hospitals. Evidence Acquisition: A search of electronic databases and manual searches of personal files for studies on quality indicators of newborn care were carried out. Searching 9 bibliographic databases, we found 85 articles, of which 22 directly relevant ones were selected and studied; hand searching yielded further records, of which 2 were included. Results: A list of 87 structure, process and outcome indicators was compiled from the articles, and 26 additional measures were identified in the grey literature. After removing duplicates and categorizing the indicators into 3 domains, 18 were structure (input) measures, 41 process measures and 34 outcome measures. Conclusions: These 93 indicators provide a framework for assessing how well hospitals are providing neonatal care. The measures should be discussed by expert panels in each context to derive nationally applicable indices of neonatal care, and may be adapted for local health settings.
Abstract:
The assessment of water quality has changed markedly worldwide in recent years, especially in Europe due to the implementation of the Water Framework Directive. Fish were considered a key element in this context, and several fish-based multi-metric indices have been proposed. In this study, we propose such an index, the Estuarine Fish Assessment Index (EFAI), developed for Portuguese estuaries and designed for the overall assessment of transitional waters; it can also be applied at the water body level within an estuary. The EFAI integrates seven metrics: species richness; percentage of marine migrants; number of species and abundance of estuarine resident species; number of species and abundance of piscivorous species; status of diadromous species; status of introduced species; and status of disturbance-sensitive species. Fish sampling surveys were conducted in 2006, 2009 and 2010, using beam trawl, in 13 estuarine systems along the Portuguese coast. Most of the metrics showed high variability among the transitional systems surveyed. According to the EFAI values, Portuguese estuaries presented a "Good" water quality status (except the Douro in one particular year), and the assessments in different years were generally concordant, with a few exceptions. The relationship between the EFAI and the Anthropogenic Pressure Index (API) was not significant, but a negative and significant correlation was registered between the EFAI and the expert-judgement pressure index, at both the estuary and water body levels. An ordination analysis performed to evaluate similarities among North-East Atlantic Geographical Intercalibration Group (NEAGIG) fish-based indices revealed four main groups: the French index, which is substantially different from all the others (it uses only four metrics, all based on densities); the indices from Ireland, the United Kingdom and Spain (Asturias and Cantabria); the Dutch and German indices; and the indices of Belgium, Portugal and Spain (Basque Country). The need for detailed studies of these assessment tools, including comparative approaches, especially regarding their response to anthropogenic pressures, is stressed. (C) 2011 Elsevier Ltd. All rights reserved.
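The abstract does not state the EFAI's scoring and weighting rules, so the sketch below shows only a generic, hypothetical way of combining metric scores on a common scale into a single multi-metric index; the actual EFAI combination rule is defined in the paper.

```python
# Purely hypothetical multi-metric index: each metric is first scored on an
# assumed common 1-5 scale, then equally weighted scores are averaged.
from statistics import mean

def multimetric_index(metric_scores: dict) -> float:
    """Average equally weighted metric scores (hypothetical combination rule)."""
    for name, s in metric_scores.items():
        if not 1 <= s <= 5:
            raise ValueError(f"{name}: scores assumed to lie on a 1-5 scale")
    return mean(metric_scores.values())

scores = {"species_richness": 4, "marine_migrants_pct": 3,
          "estuarine_residents": 4, "piscivorous": 3,
          "diadromous_status": 5, "introduced_status": 4,
          "sensitive_species_status": 3}
print(multimetric_index(scores))   # would then map to a class such as "Good"
```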
Abstract:
Soil is a key resource that provides the basis of food production and sustains and delivers several ecosystem services, including regulating and supporting services such as water and climate regulation, soil formation, and the cycling of nutrients, carbon and water. During the last decades, population growth, dietary changes and the subsequent pressure on food production have severely damaged soil quality as a consequence of intensive, high-input agriculture. While agriculture is supposed to maintain and steward its most important resource base, it compromises soil quality and fertility through erosion, decline of soil organic matter and biodiversity, compaction, etc., and thus jeopardises the yield increases necessary for the coming decades. New or improved cropping systems and agricultural practices are needed to ensure a sustainable use of this resource and to take full advantage of its associated ecosystem services. New and better soil quality indicators are also crucial for fast, in-field soil diagnosis to help farmers decide on the best management practices to adopt under specific pedo-climatic conditions. Conservation Agriculture and its fundamental principles of minimum (or no) soil disturbance, permanent organic soil cover and crop rotation/intercropping certainly figure among the approaches capable of guaranteeing sustainable soil management. The iSQAPER project (Interactive Soil Quality Assessment in Europe and China for Agricultural Productivity and Environmental Resilience) is tackling this problem with the development of a Soil Quality application (SQAPP) that links soil and agricultural management practices to soil quality indicators and will provide an easy-to-use tool for farmers and land managers to judge their soil status. The University of Évora leads WP6, "Evaluating and demonstrating measures to improve Soil Quality". In this work package, several promising soil and agricultural management practices will be tested at selected sites and evaluated using the set of soil quality indicators defined for the SQAPP tool. The project as a whole, and WP6 in particular, can help prove and demonstrate, under different pedo-climatic conditions, the impact of Conservation Agriculture practices on soil quality and function, as stated in the call under which this project was submitted.
Abstract:
OBJECTIVE: Quality assessment in consultation-liaison psychiatry (CLP) is extremely difficult and must take numerous factors into account. The general practitioner (GP) of the patients seen by CL psychiatrists seems an essential factor to consider in evaluating CL work; however, as far as we know, no study has done so. We therefore involved the GPs in assessing our CL work at the St-Loup-Orbe hospital. METHOD: We set up a qualitative study consisting of semi-structured interviews with 18 GPs caring for 45 patients who had received a psychiatric CL intervention. Furthermore, we invited the GPs to assess CLP as a specialization as well as CLP as practised at the St-Loup-Orbe hospital. RESULTS: Relative to the total number of cases, the GPs judged the impact as: highly favorable > favorable > indifferent > negative. The GPs' critiques, whether positive or negative, are highly informative. CONCLUSIONS: GPs view CLP interventions favorably and consider them constructive on the whole. On the other hand, they are not sufficiently considered as partners during their patients' hospital stay. Furthermore, CLP must evaluate its impact at some remove from the consultation and take the GPs' assessments into account to improve CL quality.
Abstract:
The goal of this work is to develop a method to objectively compare the performance of a digital and a screen-film mammography system in terms of image quality. The method takes into account the dynamic range of the image detector, the detection of high- and low-contrast structures, the visualisation of the images and the observer response. A test object designed to represent a compressed breast was constructed from various tissue-equivalent materials ranging from purely adipose to purely glandular composition. Different areas within the test object permitted the evaluation of low- and high-contrast detection, spatial resolution and image noise. All the images (digital and conventional) were captured using a CCD camera to include the visualisation process in the image quality assessment. A mathematical model observer (the non-prewhitening matched filter), which calculates the detectability of high- and low-contrast structures from spatial resolution, noise and contrast, was used to compare the two technologies. Our results show that, for a given patient dose, the detection of high- and low-contrast structures is significantly better for the digital system than for the conventional screen-film system studied. The method of using a test object with a large range of tissue compositions, combined with a camera, to compare conventional and digital imaging modalities can be applied to other radiological imaging techniques. In particular, it could be used to optimise the process of radiographic reading of soft-copy images.
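The non-prewhitening matched-filter detectability can be written in the standard frequency-domain form, sketched below; the paper's exact implementation (including the camera-based visualisation stage) may differ.

```python
# Standard frequency-domain form of the non-prewhitening (NPW) matched-filter
# detectability: delta_s is the expected signal spectrum (object contrast times
# system MTF) and nps the noise power spectrum, on the same 2D frequency grid.
import numpy as np

def snr_npw(delta_s: np.ndarray, nps: np.ndarray, df: float) -> float:
    """SNR_NPW^2 = (integral |dS|^2)^2 / integral(|dS|^2 * NPS); returns SNR."""
    s2 = np.abs(delta_s) ** 2
    numerator = (s2.sum() * df**2) ** 2          # df**2: 2D frequency bin area
    denominator = (s2 * nps).sum() * df**2
    return float(np.sqrt(numerator / denominator))
```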
Abstract:
PURPOSE: Iterative algorithms introduce new challenges in the field of image quality assessment. The purpose of this study is to use a mathematical model to objectively evaluate low-contrast detectability in CT. MATERIALS AND METHODS: A QRM 401 phantom containing 5 and 8 mm diameter spheres with contrast levels of 10 and 20 HU was used. The images were acquired at 120 kV with CTDIvol equal to 5, 10, 15 and 20 mGy and reconstructed using the filtered back-projection (FBP), adaptive statistical iterative reconstruction 50% (ASIR 50%) and model-based iterative reconstruction (MBIR) algorithms. The model observer used is the channelized Hotelling observer (CHO) with dense difference-of-Gaussian (D-DOG) channels. The CHO performance was compared to the outcomes of six human observers who performed four-alternative forced choice (4-AFC) tests. RESULTS: At the same CTDIvol level, and according to the CHO model, the MBIR algorithm gives the highest detectability index. The outcomes of the human observers and the results of the CHO are highly correlated, regardless of the dose level, the signal considered and the algorithm used, when internal noise is added to the CHO model. The Pearson coefficient between the human observers and the CHO is 0.93 for FBP and 0.98 for MBIR. CONCLUSION: The human observers' performance can be predicted by the CHO model. This opens the way to reporting, alongside the standard dose report, the expected level of low-contrast detectability. The introduction of iterative reconstruction requires such an approach to ensure that dose reduction does not impair diagnosis.
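A minimal CHO computation in channel space looks roughly as follows; the D-DOG channel construction, internal noise injection and 4-AFC scoring used in the study are omitted, and `channels` is assumed to be a (n_pixels, n_channels) matrix of channel templates.

```python
# Channelized Hotelling observer: project ROIs onto channels, build the
# Hotelling template from the pooled channel covariance, and report d'.
import numpy as np

def cho_detectability(signal_imgs, noise_imgs, channels):
    """signal_imgs/noise_imgs: (n_rois, n_pixels) stacks of flattened ROIs."""
    v_s = signal_imgs @ channels          # channel outputs, signal-present
    v_n = noise_imgs @ channels           # channel outputs, signal-absent
    dv = v_s.mean(axis=0) - v_n.mean(axis=0)
    cov = 0.5 * (np.cov(v_s, rowvar=False) + np.cov(v_n, rowvar=False))
    w = np.linalg.solve(cov, dv)          # Hotelling template in channel space
    return float(np.sqrt(dv @ w))         # d'^2 = dv^T cov^{-1} dv
```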
Abstract:
There is a wide range of telecommunications services that transmit voice, video and data through complex transmission networks, and in some cases the service does not reach an acceptable quality level for the end user. In this context, methods for assessing video and voice quality play a very important role. This paper presents a classification scheme, based on different criteria, for the methods and metrics that have been studied in recent years. It then shows how video quality is affected by degradation in the transmission channel for two kinds of services: digital TV (ISDB-TB), due to fading in the air interface, and a video streaming service on an IP network, due to packet loss. For the digital TV tests, a scenario was set up in which the digital TV transmitter is connected to an RF channel emulator, different fading models are inserted, and the resulting videos are saved on a mobile device. The streaming video tests were performed on an isolated IP network scenario in which several network conditions were scheduled, resulting in different video reception qualities. The video quality assessment is performed using objective assessment methods: PSNR, SSIM and VQM. The results show how losses in the transmission channel affect the quality of the end-user experience for both services studied.
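Of the three objective metrics used, PSNR and SSIM are straightforward to reproduce with scikit-image, as sketched below; VQM requires a dedicated tool and is not shown. Frames are assumed to be aligned grayscale arrays of equal shape.

```python
# Full-reference frame quality with scikit-image: PSNR and SSIM between a
# reference frame and its degraded counterpart.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def frame_quality(reference: np.ndarray, degraded: np.ndarray):
    psnr = peak_signal_noise_ratio(reference, degraded, data_range=255)
    ssim = structural_similarity(reference, degraded, data_range=255)
    return psnr, ssim

rng = np.random.default_rng(1)
ref = rng.integers(0, 256, size=(240, 320)).astype(np.uint8)      # synthetic frame
deg = np.clip(ref.astype(int) + rng.normal(0, 5, ref.shape), 0, 255).astype(np.uint8)
print("PSNR=%.1f dB, SSIM=%.3f" % frame_quality(ref, deg))
```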