970 results for video quality assessment
Abstract:
In this paper, a new reconfigurable multi-standard architecture is introduced for integer-pixel motion estimation and a standard-cell based chip design study is presented. The architecture has been designed to cover most of the common block-based video compression standards, including MPEG-2, MPEG-4, H.263, H.264, AVS and WMV-9. It exhibits simpler control, high throughput and relatively low hardware cost, and is highly competitive when compared with existing designs for specific video standards. It can also, through the use of control signals, be dynamically reconfigured at run-time to accommodate different system constraints, such as the trade-off between power dissipation and video quality. The computational rates achieved make the circuit suitable for high-end video processing applications. Silicon design studies indicate that circuits based on this approach incur only a relatively small penalty in terms of power dissipation and silicon area when compared with implementations for specific standards.
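The abstract gives no implementation detail, but the core operation shared by all the listed standards is integer-pixel block matching. A minimal sketch of full-search SAD matching is shown below (illustrative only; the block size, search range and function names are assumptions, not taken from the paper):

```python
import numpy as np

def full_search_sad(cur_block, ref_frame, top, left, search_range=16):
    """Return the integer-pixel motion vector minimising the sum of
    absolute differences (SAD) between cur_block and candidate blocks
    in ref_frame, searched within +/- search_range pixels."""
    n = cur_block.shape[0]
    best_sad, best_mv = np.inf, (0, 0)
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + n > ref_frame.shape[0] or x + n > ref_frame.shape[1]:
                continue
            cand = ref_frame[y:y + n, x:x + n]
            sad = np.abs(cur_block.astype(np.int32) - cand.astype(np.int32)).sum()
            if sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv, best_sad
```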
Abstract:
This study was carried out to investigate whether the electronic portal imaging (EPI) acquisition process could be optimized and, as a result, tolerance and action levels set for the PIPSPro QC-3V phantom image quality assessment. The aim of the optimization process was to reduce the dose delivered to the patient while maintaining a clinically acceptable image quality. This is of interest when images are acquired in addition to the planned patient treatment, rather than being acquired using the treatment field during the patient's treatment. A series of phantoms was used to assess image quality for different acquisition settings relative to the baseline values obtained following acceptance testing. Eight Varian aS500 EPID systems on four matched Varian 600C/D linacs and four matched Varian 2100C/D linacs were compared for consistency of performance, and images were acquired at the four main orthogonal gantry angles. Images were acquired using a 6 MV beam operating at 100 MU min^-1 and the low-dose acquisition mode. Doses used in the comparison were measured using a Farmer ionization chamber placed at dmax in solid water. The results demonstrated that the number of reset frames did not have any influence on the image contrast, but the number of frame averages did. The expected increase in noise with a corresponding decrease in contrast was also observed when reducing the number of frame averages. The optimal settings for the low-dose acquisition mode with respect to image quality and dose were found to be one reset frame and three frame averages. All patients at the Northern Ireland Cancer Centre are now imaged using one reset frame and three frame averages in the 6 MV 100 MU min^-1 low-dose acquisition mode. Routine EPID QC contrast tolerance (+/-10) and action (+/-20) levels using the PIPSPro phantom, based around expected values of 190 (Varian 600C/D) and 225 (Varian 2100C/D), have been introduced. The dose at dmax from electronic portal imaging has been reduced by approximately 28%, and while the image quality has been reduced, the images produced are still clinically acceptable.
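The tolerance and action levels introduced amount to a simple band check of the measured QC-3V contrast value against the expected baseline for each linac type. A minimal sketch of that check, using the figures quoted in the abstract (function and label names are illustrative, not part of the reported QC procedure):

```python
def check_epid_contrast(measured, expected, tolerance=10.0, action=20.0):
    """Classify a PIPSPro QC-3V contrast reading against its baseline.

    expected : 190 for the Varian 600C/D units, 225 for the 2100C/D units.
    Returns 'pass' within the tolerance level, 'warning' within the action
    level, and 'fail' beyond it.
    """
    deviation = abs(measured - expected)
    if deviation <= tolerance:
        return "pass"
    if deviation <= action:
        return "warning"
    return "fail"

print(check_epid_contrast(212, 225))  # -> 'warning' (deviation of 13)
```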
Abstract:
Roadside safety barrier designs are tested with passenger cars in Europe using the EN 1317 standard, in which the impact angle for normal, high and very high containment level tests is 20°. In comparison to EN 1317, the US standard MASH specifies higher impact angles for cars and pickups (25°) and different vehicle masses. Studies in Europe (RISER) and the US have reported values for the 90th percentile impact angle of 30°–34°. Thus, the limited evidence available suggests that the 20° angle applied in EN 1317 may be too low.
The first goal of this paper is to use the US NCHRP database (Project NCHRP 17–22) to assess the distribution of impact angle and collision speed in recent run-off-road (ROR) accidents. Second, based on the findings of the statistical analysis and on the impact angles and speeds reported in the literature, an LS-DYNA finite element analysis was carried out to evaluate the normal containment level of concrete barriers in non-standard collisions. The FE model was validated against a crash test of a portable concrete barrier carried out at the UK Transport Research Laboratory (TRL).
The accident data analysis for run-off-road accidents indicates that a substantial proportion of accidents involve an impact angle in excess of 20°. The baseline LS-DYNA model showed good agreement with experimental acceleration severity index (ASI) data, and the parametric analysis indicates a very significant influence of impact angle on ASI. Accordingly, a review of European run-off-road accidents and of the configuration of EN 1317 should be performed.
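For reference, the ASI used here to compare the baseline model with the crash test is conventionally computed from 50 ms moving averages of the vehicle accelerations, normalised by limit accelerations. A minimal sketch, assuming the limit values commonly cited for EN 1317 (12 g, 9 g and 10 g for the x, y and z directions; the sampling-rate handling and function name are assumptions):

```python
import numpy as np

def asi(ax, ay, az, fs, limits=(12.0, 9.0, 10.0)):
    """Acceleration severity index from vehicle accelerations given in g.

    ax, ay, az : acceleration components sampled at fs Hz.
    Each component is averaged over a 50 ms moving window, scaled by its
    limit acceleration, and the maximum of the combined index over time
    is returned.
    """
    window = max(1, int(round(0.050 * fs)))           # 50 ms averaging window
    kernel = np.ones(window) / window
    avg = [np.convolve(a, kernel, mode="valid") for a in (ax, ay, az)]
    combined = np.sqrt(sum((a / lim) ** 2 for a, lim in zip(avg, limits)))
    return combined.max()
```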
Abstract:
This paper presents a new rate-control algorithm for live video streaming over wireless IP networks, based on selective frame discarding. In the proposed mechanism, excess 'P' frames are dropped from the output queue at the sender using a congestion estimate based on packet-loss statistics obtained from RTCP feedback and from the Data Link (DL) layer. The performance of the algorithm is evaluated through computer simulation. The paper also presents a characterisation of packet losses caused by transmission errors and congestion, which can help in choosing appropriate strategies to maximise the video quality experienced by the end user.
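The abstract describes the mechanism only at a high level; a minimal sketch of the drop decision it implies, where a congestion estimate derived from RTCP-reported loss drives the discarding of excess P frames from the sender queue (the threshold, smoothing factor and class names are assumptions, not the paper's parameters):

```python
from collections import deque

class SelectiveFrameDropper:
    """Drop excess P frames when the loss-based congestion estimate is high."""

    def __init__(self, loss_threshold=0.05, alpha=0.3):
        self.loss_threshold = loss_threshold   # congestion threshold (assumed)
        self.alpha = alpha                     # EWMA smoothing factor (assumed)
        self.loss_estimate = 0.0
        self.queue = deque()

    def on_rtcp_report(self, fraction_lost):
        # Smooth the packet-loss fraction reported in RTCP receiver reports.
        self.loss_estimate = (1 - self.alpha) * self.loss_estimate + self.alpha * fraction_lost

    def enqueue(self, frame_type, frame_data):
        # Under congestion, discard P frames; always keep I frames.
        if frame_type == "P" and self.loss_estimate > self.loss_threshold:
            return False                       # frame dropped at the sender queue
        self.queue.append((frame_type, frame_data))
        return True
```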
Abstract:
The number of software applications available on the Internet for distributing video streams in real time over P2P networks has grown quickly in the last two years. Typically, this kind of distribution is carried out by television channel broadcasters trying to make their content globally available, using viewers' resources to support large-scale distribution of video without incurring incremental costs. However, the lack of adaptation in video quality, combined with the lack of a standard protocol for this kind of multimedia distribution, has driven content providers largely to ignore it as a solution for video delivery over the Internet. While the scalable extension of H.264 encoding (H.264/SVC) can be used to support terminal and network heterogeneity, it is not clear how it can be integrated into a P2P overlay to form a large-scale, real-time distribution. In this paper, we start by defining a solution that combines the most popular P2P file-sharing protocol, BitTorrent, with H.264/SVC encoding for real-time video content delivery. Using this solution, we then evaluate the effect of several parameters on the quality received by peers.
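The integration the authors evaluate is described only in outline; a minimal sketch of the kind of layer selection a BitTorrent-style client could apply to H.264/SVC content, where pieces of higher enhancement layers are requested only while the estimated download rate can sustain them (the function, parameters and rate test are assumptions, not the authors' protocol):

```python
def select_layers(layer_bitrates_kbps, download_rate_kbps, headroom=1.2):
    """Choose how many SVC layers (base + enhancements) to request.

    layer_bitrates_kbps : cumulative bitrate required to play layers 0..i.
    download_rate_kbps  : current estimated download rate from peers.
    headroom            : safety margin so pieces arrive before playout.
    """
    chosen = 0
    for i, cumulative in enumerate(layer_bitrates_kbps):
        if cumulative * headroom <= download_rate_kbps:
            chosen = i + 1
        else:
            break
    return chosen  # request pieces only for layers below this index

# Example: base layer 400 kbps, plus 300 kbps and 500 kbps enhancements.
layers = [400, 700, 1200]
print(select_layers(layers, download_rate_kbps=900))  # -> 2 (base + first enhancement)
```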
Abstract:
What is the best luminance contrast weighting function for image quality optimization? Traditionally measured contrast sensitivity functions (CSFs) have often been used as weighting functions in image quality and difference metrics. Such weightings have been shown to result in increased sharpness and perceived quality of test images. We suggest that contextual CSFs (cCSFs) and contextual discrimination functions (cVPFs) should provide a basis for further improvement, since these are measured directly from pictorial scenes, modelling threshold and suprathreshold sensitivities within the context of complex masking information. Image quality assessment is understood to require the detection and discrimination of masked signals, making contextual sensitivity and discrimination functions directly relevant. In this investigation, test images are weighted with a traditional CSF, a cCSF, a cVPF and a constant function. Controlled mutations of these functions are also applied as weighting functions, seeking the optimal spatial-frequency band weighting for quality optimization. Image quality, sharpness and naturalness are then assessed in two-alternative forced-choice psychophysical tests. We show that maximal quality for our test images results from cCSFs and cVPFs mutated to boost contrast in the higher visible frequencies.
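In general terms, weighting an image with a CSF-like function amounts to scaling its spatial-frequency bands by the function's value at each frequency before assessment or enhancement. A minimal sketch of such frequency-domain band weighting, under the assumption of a simple radial band partition (the band edges, weights and viewing-geometry mapping are placeholders, not the measured cCSF/cVPF values used in the study):

```python
import numpy as np

def weight_frequency_bands(image, band_edges_cpd, band_weights, max_freq_cpd):
    """Scale spatial-frequency bands of a greyscale image by given gains.

    band_edges_cpd : band boundaries in cycles/degree, e.g. [0, 2, 4, 8, 16].
    band_weights   : one gain per band; a CSF-like weighting boosts mid/high bands.
    max_freq_cpd   : frequency in cycles/degree corresponding to Nyquist.
    """
    F = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    fy = np.fft.fftshift(np.fft.fftfreq(h)) * 2 * max_freq_cpd
    fx = np.fft.fftshift(np.fft.fftfreq(w)) * 2 * max_freq_cpd
    radius = np.sqrt(fx[None, :] ** 2 + fy[:, None] ** 2)   # radial frequency (c/deg)
    gain = np.ones_like(radius)
    for lo, hi, wgt in zip(band_edges_cpd[:-1], band_edges_cpd[1:], band_weights):
        gain[(radius >= lo) & (radius < hi)] = wgt
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * gain)))
```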
Abstract:
Over the last ten years, the cost of maintaining object-oriented systems has grown to account for more than 70% of their total cost. This situation is due to several factors, the most important of which are: imprecise user specifications, a rapidly changing execution environment, and the poor internal quality of the systems. Of all these factors, the only one over which we have real control is the internal quality of the systems. Many quality models have been proposed in the literature to help control quality. However, most of these models use class metrics (for example, the number of methods in a class) or metrics of relationships between classes (for example, the coupling between two classes) to measure the internal attributes of systems. Yet the quality of object-oriented systems does not depend solely on the structure of their classes, which is what these metrics measure, but also on the way the classes are organised, that is, on their design, which generally manifests itself through design patterns and anti-patterns. In this thesis we propose the DEQUALITE method, which makes it possible to systematically build quality models that take into account not only the internal attributes of systems (through metrics) but also their design (through design patterns and anti-patterns). The method uses a learning approach based on Bayesian networks and builds on the results of a series of experiments evaluating the impact of design patterns and anti-patterns on system quality. These experiments, carried out on 9 large open-source object-oriented systems, lead to the following conclusions: • Contrary to intuition, design patterns do not always improve system quality; highly coupled implementations of design patterns, for example, affect the structure of classes and have a negative impact on their change- and fault-proneness. • Classes participating in anti-patterns are much more likely to change and to be involved in fault fixes than the other classes of a system. • A non-negligible percentage of classes are involved simultaneously in design patterns and in anti-patterns. Design patterns have a positive effect in the sense that they mitigate the anti-patterns. We apply and validate our method on three open-source object-oriented systems to demonstrate the contribution of system design to quality assessment.
Abstract:
One of the objectives of the current investigation was to evaluate the effectiveness of Spirodela polyrhiza in removing heavy metals and other contaminants from water samples collected from the wetland sites of Eloor and Kannamaly under controlled conditions. The results obtained from the current study suggest that the test material S. polyrhiza should be used in the biomonitoring and phytoremediation of municipal, agricultural and industrial effluents because of its simplicity, sensitivity and cost-effectiveness. The study throws light on the potential of this plant as an assessment tool in two diverse wetlands in the Ernakulam district. The results show the usefulness of combining physicochemical analysis with bioassays, as such an approach ensures a better understanding of the toxicity of chemical pollutants and their influence on plant health. The results also show the suitability of Spirodela for surface water quality assessment, as all selected parameters showed consistency with respect to water samples collected over the three monitoring periods. Similarly, the relationship between the exposure period (2, 4 and 8 days) and the measured parameters was studied in detail. Spirodela is a consistent test material because it provides homogeneous plant material: owing to predominantly vegetative reproduction, new fronds are formed by clonal propagation, producing a population of genetically homogeneous plants and hence small variability between treated individuals. It has been observed that phytoremediation of the water samples collected from Eloor and Kannamaly using the floating plant system is a predominant method that is economical to construct, requires little maintenance and is eco-friendly.
Abstract:
Keywords: quality management; self-evaluation of the organisation; citizen/customer satisfaction; evaluation of impact on society; key performance evaluation; comparison of good practices (benchmarking); continuous improvement.
In professional environments, when the quality assessment of museums is discussed, one immediately thinks of the standing of the directors and curators, the erudition and specialisation of their knowledge, the diversity of the material gathered and the study of the collections, the methods of collection conservation and environmental control, the regularity and renown of the exhibitions and artists, the building's architecture and site, the recreation of environments, and the design of the museographic equipment. We accept that the roles and attributes listed above can contribute to defining a specific museological good practice within a hierarchised functional perspective (the museum functions), and to classifying museums on a scale validated between peers and based on "installed" appreciation criteria imposed from the top down according to the "prestige" of the products and of those who conceive them; but these criteria say nothing about the effective satisfaction of citizens/customers or the real impact on society. There is a lack of evaluation instruments that would tell us what the museum is and represents in contemporary society, focused on being and on the relation with the other, rather than on ostentatious possession and on doing merely to meet one's duties. Yet it is only possible to evaluate something through measurement and comparison, on the basis of well-defined criteria and a common grid, involving all of the actors in the self-evaluation, in the definition of the aims to be fulfilled and in the obtaining of results.
Abstract:
The development of effective methods for predicting the quality of three-dimensional (3D) models is fundamentally important for the success of tertiary structure (TS) prediction strategies. Since CASP7, the Quality Assessment (QA) category has existed to gauge the ability of various model quality assessment programs (MQAPs) to predict the relative quality of individual 3D models. For the CASP8 experiment, automated predictions were submitted in the QA category using two methods from the ModFOLD server: ModFOLD version 1.1 and ModFOLDclust. ModFOLD version 1.1 is a single-model, machine-learning-based method, which was used for automated predictions of global model quality (QMODE1). ModFOLDclust is a simple clustering-based method, which was used for automated predictions of both global and local quality (QMODE2). In addition, manual predictions of model quality were made using ModFOLD version 2.0, an experimental method that combines the scores from ModFOLDclust and ModFOLD v1.1. Predictions from the ModFOLDclust method were the most successful of the three in terms of global model quality, whilst the ModFOLD v1.1 method was comparable in performance to other single-model based methods. In addition, the ModFOLDclust method performed well at predicting the per-residue, or local, model quality scores. Predictions of the per-residue errors in our own 3D models, selected using the ModFOLD v2.0 method, were also the most accurate compared with those from other methods. All of the MQAPs described are publicly accessible via the ModFOLD server at http://www.reading.ac.uk/bioinf/ModFOLD/. The methods are also freely available to download from http://www.reading.ac.uk/bioinf/downloads/.
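Clustering-based quality assessment of the kind described here is, in essence, a consensus score: each model is compared against all the others in the ensemble and its predicted quality reflects its mean structural similarity to the rest. A minimal sketch of that idea (the pairwise similarity function is a stand-in; the actual method relies on structural superposition scores rather than this placeholder):

```python
def consensus_quality(models, pairwise_similarity):
    """Assign each model the mean similarity to all other models.

    models              : list of model identifiers.
    pairwise_similarity : function(model_a, model_b) -> score in [0, 1].
    Models that resemble many others score highly; structural outliers score low.
    """
    scores = {}
    for i, m in enumerate(models):
        others = [pairwise_similarity(m, n) for j, n in enumerate(models) if j != i]
        scores[m] = sum(others) / len(others) if others else 0.0
    return scores
```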
Abstract:
Groundwater is an important resource in the UK, with 45% of public water supplies in the Thames Water region derived from subterranean sources. In urban areas, groundwater has been affected by anthropogenic activities over a long period of time and from a multitude of sources. At present, groundwater quality is assessed using a range of chemical species to determine the extent of contamination. However, analysing a complex mixture of chemicals is time-consuming and expensive, whereas the use of an ecotoxicity test provides information on (a) the degree of pollution present in the groundwater and (b) the potential effect of that pollution. Microtox(TM), Eclox(TM) and Daphnia magna microtests were used in conjunction with standard chemical protocols to assess the contamination of groundwaters from sites throughout the London Borough of Hounslow and nearby Heathrow Airport. Because of their precision, range of responses and ease of use, the Daphnia magna and Microtox(TM) tests are the bioassays that appear to be most effective for assessing groundwater toxicity. However, neither test is ideal, because it is also essential to monitor water hardness. Eclox(TM) does not appear to be suitable for groundwater-quality assessment in this area, because it is adversely affected by high total dissolved solids and electrical conductivity.
Abstract:
Distributed multimedia supports a symbiotic infotainment duality, i.e. the ability to transfer information to the user, yet also provide the user with a level of satisfaction. As multimedia is ultimately produced for the education and/or enjoyment of viewers, the user's perspective on presentation quality is surely of equal importance to objective Quality of Service (QoS) technical parameters in defining distributed multimedia quality. In order to measure the user perspective of multimedia video quality extensively, we introduce an extended model of distributed multimedia quality that segregates quality into three discrete levels: the network level, the media level and the content level, using two distinct quality perspectives: the user perspective and the technical perspective. Since experimental questionnaires do not provide continuous monitoring of user attention, eye tracking was used in our study to provide a better understanding of the role that the human element plays in the reception, analysis and synthesis of multimedia data. Results showed that video content adaptation results in disparity in user eye-paths when: i) no single or obvious point of focus exists; or ii) the point of attention changes dramatically. Accordingly, appropriate technical- and user-perspective parameter adaptation is implemented for all quality abstractions of our model, i.e. the network level (via simulated delay and jitter), the media level (via a technical- and user-perspective manipulated region-of-interest attentive display) and the content level (via display type and video clip type). Our work has shown that user-perceived distributed multimedia quality cannot be achieved by means of purely technical-perspective QoS parameter adaptation.
Abstract:
Model quality assessment programs (MQAPs) aim to assess the quality of modelled 3D protein structures. The provision of quality scores describing both global and local (per-residue) accuracy is extremely important, as without quality scores we are unable to determine the usefulness of a 3D model for further computational and experimental wet-lab studies. Here, we briefly discuss protein tertiary structure prediction, along with the biennial Critical Assessment of Techniques for Protein Structure Prediction (CASP) competition and its key role in driving the field of protein model quality assessment. We also briefly discuss the top MQAPs from previous CASP competitions. Additionally, we describe our downloadable and webserver-based model quality assessment methods: ModFOLD3, ModFOLDclust, ModFOLDclustQ, ModFOLDclust2 and IntFOLD-QA. We provide a practical step-by-step guide to using our downloadable and webserver-based tools and include examples of their application for improving tertiary structure prediction, ligand-binding-site residue prediction and oligomer prediction.
Abstract:
Pervasive and ubiquitous computing has motivated research on multimedia adaptation, which aims at matching the video quality to the user's needs and device restrictions. This technique has a high computational cost, which needs to be studied and estimated when designing architectures and applications. This paper presents an analytical model to quantify these video transcoding costs in a hardware-independent way. The model was used to analyse the impact of transcoding delays on end-to-end live-video transmissions over LANs, MANs and WANs. Experiments confirm that the proposed model helps to define the best transcoding architecture for different scenarios.
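The analytical model is presented independently of hardware; in spirit, the end-to-end latency of a live transcoded stream decomposes into capture, transcoding, network and decoding terms. A minimal sketch of that decomposition (the terms, function name and example figures are illustrative assumptions, not the paper's calibrated model):

```python
def end_to_end_delay_ms(capture_ms, transcode_ms_per_frame, network_rtt_ms,
                        jitter_buffer_ms, decode_ms_per_frame):
    """Sum the per-frame delay contributions of a live transcoded stream."""
    one_way_network = network_rtt_ms / 2.0
    return (capture_ms + transcode_ms_per_frame + one_way_network
            + jitter_buffer_ms + decode_ms_per_frame)

# Example: a LAN scenario vs. a WAN scenario with the same transcoder.
print(end_to_end_delay_ms(33, 12, 2, 20, 8))    # LAN -> 74.0 ms
print(end_to_end_delay_ms(33, 12, 120, 60, 8))  # WAN -> 173.0 ms
```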
Abstract:
The purpose of this paper is to show by which means quality in on-line education is achieved at Dalarna University. As a leading provider of on-line university courses in northern Europe in terms of the proportion of students conducting their studies entirely on-line relative to the whole student body (approximately 70% on-line students, all subjects included), Dalarna University has acquired extensive practical experience in the field of information technologies related to distance education. It has been deemed essential to ensure that the quality of teaching reflects the principles governing the assessment of learning, so that on-line education is regarded as comparable to campus education from both a legal and a cognitive point of view. Dalarna University began offering on-line courses in 2002, and it soon became clear that interaction between teacher and student should make its mark at all stages of the learning process, in order both to maintain learners' motivation and to ensure the assimilation of knowledge. We illustrate these aspects by giving examples of what has been done in recent years in the on-line teaching of languages. As this method of teaching is not limited to learning basic language skills, but also extends to the study of literature, social issues and the language systems of the various cultures, our presentation offers a broad range of areas where the principles of quality in education are applied on a daily basis.