985 results for Evaluation metrics


Relevance: 30.00%

Abstract:

High Dynamic Range (HDR) imaging was used to collect luminance information at workstations in two open-plan office buildings in Queensland, Australia: one lit by skylights, vertical windows and electric light, and another by skylights and electric light. This paper compares illuminance and luminance data collected in these offices with occupant feedback to evaluate these open-plan environments against available and emerging metrics for visual comfort and glare. This study highlights issues of daylighting quality and measurement specific to open-plan spaces. The results demonstrate that overhead glare is a serious threat to user acceptance of skylights, and that the integration and control of electric light and daylight have a major impact on the perception of daylighting quality. With regard to the measurement of visual comfort, it was found that the Daylight Glare Probability (DGP) gave poor agreement with occupant reports of discomfort glare in open-plan spaces with skylights, while the CIE Glare Index (CGI) gave the best agreement. Horizontal and vertical illuminances gave no indication of visual comfort in these spaces.
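As background for the glare comparison above, the DGP is defined by a published formula (Wienold & Christoffersen, 2006); a minimal sketch of that form follows. The scene values in the example are hypothetical illustrations, not HDR data from this study:

```python
import math

def daylight_glare_probability(E_v, sources):
    """Daylight Glare Probability, in the Wienold & Christoffersen (2006) form.

    E_v     : vertical illuminance at the eye [lx]
    sources : list of (L_s, omega_s, P) tuples -- glare-source luminance
              [cd/m^2], solid angle [sr], and Guth position index
    """
    glare_term = sum(L ** 2 * omega / (E_v ** 1.87 * P ** 2)
                     for L, omega, P in sources)
    return 5.87e-5 * E_v + 9.18e-2 * math.log10(1 + glare_term) + 0.16

# With no identified glare sources the expression reduces to the
# vertical-illuminance term plus the constant offset.
print(daylight_glare_probability(2000, []))
print(daylight_glare_probability(2000, [(4000, 0.01, 1.0)]))
```

Adding a bright source can only increase the score, since the logarithmic term is non-negative; DGP values above roughly 0.35-0.45 are commonly read as perceptible to disturbing glare.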

Relevance: 30.00%

Abstract:

Multiple reaction monitoring (MRM) mass spectrometry coupled with stable isotope dilution (SID) and liquid chromatography (LC) is increasingly used in biological and clinical studies for precise and reproducible quantification of peptides and proteins in complex sample matrices. Robust LC-SID-MRM-MS-based assays that can be replicated across laboratories and ultimately in clinical laboratory settings require standardized protocols to demonstrate that the analysis platforms are performing adequately. We developed a system suitability protocol (SSP), which employs a predigested mixture of six proteins, to facilitate performance evaluation of LC-SID-MRM-MS instrument platforms configured with nanoflow-LC systems interfaced to triple quadrupole mass spectrometers. The SSP was designed for use with low-multiplex analyses as well as high-multiplex approaches in which software-driven scheduling of data acquisition is required. Performance was assessed by monitoring a range of chromatographic and mass spectrometric metrics, including peak width, chromatographic resolution, peak capacity, and the variability in peak area and analyte retention time (RT) stability. The SSP, which was evaluated in 11 laboratories on a total of 15 different instruments, enabled early diagnosis of LC and MS anomalies that indicated suboptimal LC-MRM-MS performance. The observed range of variation in each of the metrics scrutinized serves to define the criteria for optimized LC-SID-MRM-MS platforms for routine use, with pass/fail criteria for system suitability performance measures defined as a peak area coefficient of variation <0.15, a peak width coefficient of variation <0.15, a standard deviation of RT <0.15 min (9 s), and an RT drift <0.5 min (30 s). The deleterious effect of a marginally performing LC-SID-MRM-MS system on the limit of quantification (LOQ) in targeted quantitative assays illustrates the use of, and need for, an SSP to establish robust and reliable system performance. Use of an SSP helps to ensure that analyte quantification measurements can be replicated with good precision within and across multiple laboratories and should facilitate more widespread use of MRM-MS technology by the basic biomedical and clinical laboratory research communities.
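The pass/fail thresholds quoted above can be applied mechanically to replicate injections. A minimal sketch follows; the input values are hypothetical, and the RT-drift criterion is omitted because it requires a longitudinal baseline not modelled here:

```python
import statistics

def coefficient_of_variation(values):
    """CV = sample standard deviation divided by the mean."""
    return statistics.stdev(values) / statistics.mean(values)

def system_suitability_pass(peak_areas, peak_widths, retention_times):
    """Apply the pass/fail criteria reported in the abstract:
    peak-area CV < 0.15, peak-width CV < 0.15, RT standard deviation < 0.15 min."""
    return (coefficient_of_variation(peak_areas) < 0.15
            and coefficient_of_variation(peak_widths) < 0.15
            and statistics.stdev(retention_times) < 0.15)

# Replicate injections of one peptide (areas in counts, widths in s, RTs in min):
print(system_suitability_pass(
    peak_areas=[1.02e6, 0.98e6, 1.05e6, 0.97e6],
    peak_widths=[12.1, 12.4, 11.9, 12.2],
    retention_times=[23.41, 23.44, 23.39, 23.42]))
```

A single out-of-tolerance metric fails the whole check, mirroring the all-or-nothing character of a system suitability gate.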

Relevance: 30.00%

Abstract:

As the systematic investigation of Twitter as a communications platform continues, the question of developing reliable comparative metrics for the evaluation of public, communicative phenomena on Twitter becomes paramount. What is necessary here is the establishment of an accepted standard for the quantitative description of user activities on Twitter. This standard needs to be flexible enough to be applied to a wide range of communicative situations, such as the evaluation of individual users’ and groups of users’ Twitter communication strategies, the examination of communicative patterns within hashtags and other identifiable ad hoc publics on Twitter (Bruns & Burgess, 2011), and even the analysis of very large datasets of everyday interactions on the platform. A framework for the quantitative analysis of Twitter communication enables researchers in different areas (e.g., communication studies, sociology, information systems) to adapt methodological approaches and to conduct analyses on their own. Beyond general findings about communication structures on Twitter, large amounts of data might be used to better understand issues or events retrospectively, to detect issues or events at an early stage, or even to predict certain real-world developments (e.g., election results; cf. Tumasjan, Sprenger, Sandner, & Welpe, 2010, for an early attempt to do so).

Relevance: 30.00%

Abstract:

Introduction: This study examines and compares the dosimetric quality of radiotherapy treatment plans for prostate carcinoma across a cohort of 163 patients treated at 5 centres: 83 treated with three-dimensional conformal radiotherapy (3DCRT), 33 with intensity-modulated radiotherapy (IMRT) and 47 with volumetric-modulated arc therapy (VMAT). Methods: Treatment plan quality was evaluated in terms of target dose homogeneity and organ-at-risk sparing, through the use of a set of dose metrics. These included the mean, maximum and minimum doses; the homogeneity and conformity indices for the target volumes; and a selection of dose coverage values relevant to each organ-at-risk. Statistical significance was evaluated using two-tailed Welch’s t-tests. The Monte Carlo DICOM ToolKit software was adapted to permit the evaluation of dose metrics from DICOM data exported from a commercial radiotherapy treatment planning system. Results: The 3DCRT treatment plans offered greater planning target volume dose homogeneity than the other two treatment modalities. The IMRT and VMAT plans offered greater dose reduction in the organs-at-risk, with increased compliance with recommended organ-at-risk dose constraints, compared to conventional 3DCRT treatments. When compared to each other, IMRT and VMAT did not provide significantly different treatment plan quality for like-sized tumour volumes. Conclusions: This study indicates that IMRT and VMAT provide similar dosimetric quality, superior to that achieved with 3DCRT.
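The homogeneity and conformity indices mentioned above have several published definitions, and the abstract does not specify which were used; the sketch below uses one common form of each, with illustrative (not clinical) numbers:

```python
def homogeneity_index(ptv_doses, prescription_dose):
    """One common definition: HI = (D_max - D_min) / D_prescribed.
    Values closer to 0 indicate a more homogeneous target dose.
    `ptv_doses` are point doses sampled inside the planning target volume."""
    return (max(ptv_doses) - min(ptv_doses)) / prescription_dose

def conformity_index(target_volume, prescription_isodose_volume):
    """RTOG-style CI: ratio of the volume enclosed by the prescription
    isodose to the target volume; 1.0 indicates perfect conformity."""
    return prescription_isodose_volume / target_volume

ptv_doses = [74.1, 75.6, 76.2, 73.9, 75.0]   # Gy, illustrative samples
print(homogeneity_index(ptv_doses, 74.0))
print(conformity_index(target_volume=120.0, prescription_isodose_volume=138.0))
```

A CI above 1.0, as in the example, means the prescription isodose spills beyond the target, which is the trade-off the organ-at-risk metrics in the study are designed to expose.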

Relevance: 30.00%

Abstract:

Objective: To formally evaluate the written discharge advice for people with mild traumatic brain injury (mTBI). Methods: Eleven publications met the inclusion criteria: (1) intended for adults; (2) no more than two A4 pages; (3) published in English; (4) freely accessible; and (5) currently used (or suitable for use) in Australian hospital emergency departments or similar settings. Two independent raters evaluated the content and style of each publication against established standards. The readability of each publication, the diagnostic term(s) contained in it and a modified Patient Literature Usefulness Index (mPLUI) were also evaluated. Results: The mean content score was 19.18 ± 8.53 (maximum = 31) and the mean style score was 6.8 ± 1.34 (maximum = 8). The mean Flesch-Kincaid reading ease score was 66.42 ± 4.3. The mean mPLUI score was 65.86 ± 14.97 (maximum = 100). Higher scores on these metrics indicate more desirable properties. Over 80% of the publications used mixed diagnostic terminology. One publication scored optimally on two of the four metrics and highly on the others. Discussion: The content, style, readability and usefulness of written mTBI discharge advice were highly variable. The provision of written information to patients with mTBI is advised, but this variability in materials highlights the need for evaluation before distribution. Areas are identified to guide the improvement of written mTBI discharge advice.
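The Flesch reading-ease formula behind the reported readability scores is public: 206.835 − 1.015 × (words/sentences) − 84.6 × (syllables/words). A rough sketch follows; the vowel-group syllable counter is a crude stand-in for the dictionary-based syllabification that real readability tools use:

```python
import re

def count_syllables(word):
    """Naive syllable estimate: count groups of consecutive vowels.
    Adequate for illustration only; it miscounts silent 'e' and diphthongs."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    """Flesch reading-ease score; higher is easier to read
    (scores in the 60-70 band are roughly plain English)."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / sentences)
            - 84.6 * (syllables / len(words)))

print(flesch_reading_ease("Rest at home. See your doctor if headaches persist."))
```

Short sentences of mostly one- and two-syllable words, as typical of good discharge advice, land in the "plain English" band of the scale.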

Relevance: 30.00%

Abstract:

The aim of this research was to develop a set of reliable, valid preparedness metrics, built around a comprehensive framework for assessing hospital preparedness. The research used a combination of qualitative and quantitative methods, including interviews and a Delphi study as well as a survey of hospitals in the Sichuan Province of China. The resultant framework is constructed around the stages of disaster management and includes nine key elements. Factor analysis identified four contributing factors. A comparison of hospitals' preparedness using these four factors revealed that tertiary-grade, teaching and general hospitals performed better than secondary-grade, non-teaching and non-general hospitals.

Relevance: 30.00%

Abstract:

The spatial error structure of daily precipitation derived from the latest version 7 (v7) Tropical Rainfall Measuring Mission (TRMM) level 2 data products is studied through comparison with the Asian Precipitation Highly Resolved Observational Data Integration Towards Evaluation of Water Resources (APHRODITE) data over a subtropical region of the Indian subcontinent, for seasonal rainfall over the 6 years from June 2002 to September 2007. The data products examined include v7 data from the TRMM Microwave Imager (TMI) radiometer and the precipitation radar (PR), namely 2A12, 2A25, and 2B31 (combined data from PR and TMI). The spatial distribution of uncertainty in these data products was quantified using performance metrics derived from the contingency table. For seasonal daily precipitation over a subtropical basin in India, the 2A12 data product showed greater skill in detecting and quantifying the volume of rainfall than the 2A25 and 2B31 data products. Error characterization using various error models revealed that random errors from multiplicative error models were homoscedastic and better represented rainfall estimates from the 2A12 algorithm. Error decomposition techniques performed to disentangle systematic and random errors verify that the multiplicative error model representing rainfall from the 2A12 algorithm captured a greater percentage of the systematic error than those for the 2A25 or 2B31 algorithms. The results verify that although the radiometer-derived 2A12 rainfall data are known to suffer from many sources of uncertainty, spatial analysis over the case study region of India shows that the 2A12 rainfall estimates are in very good agreement with the reference estimates for the data period considered.
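The contingency-table metrics referred to above are the standard rain-detection skill scores; a minimal sketch with hypothetical counts (not the study's data) follows:

```python
def contingency_metrics(hits, false_alarms, misses, correct_negatives):
    """Skill scores from a 2x2 rain/no-rain contingency table:
    POD (probability of detection), FAR (false-alarm ratio),
    CSI (critical success index) and overall accuracy."""
    pod = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    csi = hits / (hits + false_alarms + misses)
    accuracy = (hits + correct_negatives) / (
        hits + false_alarms + misses + correct_negatives)
    return pod, far, csi, accuracy

# Daily rain/no-rain comparison of a satellite product against gauge data:
pod, far, csi, acc = contingency_metrics(
    hits=620, false_alarms=140, misses=180, correct_negatives=1060)
print(f"POD={pod:.2f} FAR={far:.2f} CSI={csi:.2f} ACC={acc:.2f}")
```

CSI is the most conservative of the three detection scores because it penalises both misses and false alarms while ignoring the easy correct negatives.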

Relevance: 30.00%

Abstract:

Web threats are becoming a major issue for both governments and companies. Overall, web threats increased by as much as 600% during the last year (WebSense, 2013). This is a significant issue, since many major businesses provide their services online. Denial of Service (DoS) attacks are among the most significant web threats, and their aim is generally to waste the resources of the target machine (Mirkovic & Reiher, 2004). Distributed Denial of Service (DDoS) attacks are typically executed from many sources and can result in large traffic flows. During the last year, 11% of DDoS attacks were over 60 Gbps (Prolexic, 2013a). DDoS attacks are usually performed from large botnets, which are networks of remotely controlled computers. There is an increasing effort by governments and companies to shut down botnets (Dittrich, 2012), which has led attackers to look for alternative DDoS attack methods. One technique to which attackers are returning is the DDoS amplification attack. Amplification attacks use intermediate devices called amplifiers to amplify the attacker's traffic. This work outlines an evaluation tool and evaluates an amplification attack based on the Trivial File Transfer Protocol (TFTP). This attack could have an amplification factor of approximately 60, which rates highly alongside other researched amplification attacks. This could be a substantial issue globally, given that the protocol is used by approximately 599,600 publicly open TFTP servers. Mitigation methods for this threat have also been considered and a variety of countermeasures are proposed. The effects of this attack on both amplifier and target were analysed based on the proposed metrics. While it has been reported that the exploitation of TFTP would be possible (Schultz, 2013), this paper provides a complete methodology for the setup of the attack, and its verification.
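The amplification factor of approximately 60 quoted above is a ratio of traffic volumes; a minimal sketch under one common definition follows. The byte counts are hypothetical, chosen only to illustrate how a small spoofed request can translate into a ~60x reflected load:

```python
def bandwidth_amplification_factor(request_bytes, response_bytes):
    """Amplification factor: payload bytes delivered to the victim per
    payload byte the attacker sends to the amplifier."""
    return response_bytes / request_bytes

# Hypothetical TFTP exchange: a small spoofed read request causes the
# server to send (and retransmit) data blocks to the victim address.
request = 20            # bytes in the spoofed read request (illustrative)
response = 20 * 60      # cumulative bytes reflected to the victim
print(bandwidth_amplification_factor(request, response))  # -> 60.0
```

Because the ratio is per byte sent, an attacker with modest upstream bandwidth can direct a flow roughly sixty times larger at the target.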

Relevance: 30.00%

Abstract:

Low-power and lossy networks (LLNs) are usually composed of static nodes, but the increasing demand for mobility in mobile robotics and dynamic environments raises the question of how a routing protocol for low-power and lossy networks, such as RPL, would perform if a mobile sink were deployed. In this paper we investigate and evaluate the behaviour of the RPL protocol in fixed- and mobile-sink environments with respect to network metrics such as latency, packet delivery ratio (PDR) and energy consumption. Extensive simulations using the Instant Contiki simulator show significant performance differences between fixed- and mobile-sink environments. Fixed-sink LLNs performed better in terms of average power consumption, latency and packet delivery ratio. The results also demonstrate that the RPL protocol is sensitive to mobility, which increases the number of isolated nodes.
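Of the network metrics named above, PDR and latency are simple ratios over the simulation trace; a minimal sketch with made-up trace values (not results from the paper) follows:

```python
def packet_delivery_ratio(packets_received, packets_sent):
    """PDR: fraction of application packets that reach the sink."""
    return packets_received / packets_sent

def average_latency(send_times, receive_times):
    """Mean end-to-end delay in seconds over the delivered packets,
    pairing each send timestamp with its receive timestamp."""
    delays = [r - s for s, r in zip(send_times, receive_times)]
    return sum(delays) / len(delays)

# Illustrative figures for a fixed-sink run:
print(packet_delivery_ratio(941, 1000))                     # -> 0.941
print(average_latency([0.0, 1.0, 2.0], [0.42, 1.37, 2.51]))
```

A mobile-sink run would typically show both a lower PDR (packets lost while the routing graph repairs itself) and a higher average latency.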

Relevance: 30.00%

Abstract:

In this paper, metrics for assessing the performance of directional modulation (DM) physical-layer secure wireless systems are discussed. DM systems are shown to fall into two categories, static and dynamic, and the behavior of each type is discussed for QPSK modulation. Besides EVM-like and BER metrics, the secrecy rate, as used in the information theory community, is also derived for the purpose of this QPSK DM system evaluation.
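The EVM-like metric mentioned above compares received symbols against the ideal constellation; a minimal sketch of the standard RMS EVM, with hypothetical perturbed QPSK receptions, follows:

```python
import math

def evm_rms(received, reference):
    """RMS error vector magnitude, normalised by the RMS reference
    magnitude; commonly quoted as a percentage."""
    err = sum(abs(r - s) ** 2 for r, s in zip(received, reference))
    ref = sum(abs(s) ** 2 for s in reference)
    return math.sqrt(err / ref)

# Ideal QPSK constellation points and slightly perturbed receptions:
ref = [1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]
rx = [1.05 + 0.95j, 0.9 - 1.1j, -1.02 + 1.0j, -1.0 - 0.97j]
print(f"EVM = {evm_rms(rx, ref) * 100:.1f}%")
```

In a DM system the intended receiver direction should see a low EVM while eavesdropper directions see a scrambled, high-EVM constellation, which is what makes EVM a natural security metric here.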

Relevance: 30.00%

Abstract:

The problem of learning from imbalanced data is of critical importance in a large number of application domains and can be a bottleneck for various conventional learning methods that assume the data distribution to be balanced. The class imbalance problem corresponds to the situation where one class massively outnumbers the other. The imbalance between the majority and minority classes can bias machine learning models and produce unreliable outcomes if the imbalanced data are used directly. There has been increasing interest in this research area and a number of algorithms have been developed. However, independent evaluation of these algorithms is limited. This paper aims at evaluating the performance of five representative data sampling methods, namely SMOTE, ADASYN, Borderline-SMOTE, SMOTETomek and RUSBoost, that deal with class imbalance problems. A comparative study is conducted and the performance of each method is critically analysed in terms of assessment metrics. © 2013 Springer-Verlag.
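The abstract does not list which assessment metrics were used; the sketch below shows the metrics conventionally preferred for imbalanced data because, unlike plain accuracy, they are not dominated by the majority class. The confusion-matrix counts are hypothetical:

```python
import math

def imbalance_metrics(tp, fp, fn, tn):
    """Assessment metrics that stay informative under class imbalance:
    minority-class recall, precision, F1 and the geometric mean of the
    two class-wise recalls (G-mean)."""
    recall = tp / (tp + fn)                 # sensitivity on the minority class
    precision = tp / (tp + fp)
    specificity = tn / (tn + fp)            # recall on the majority class
    f1 = 2 * precision * recall / (precision + recall)
    g_mean = math.sqrt(recall * specificity)
    return {"recall": recall, "precision": precision,
            "f1": f1, "g_mean": g_mean}

# Roughly 1:10 imbalance -- plain accuracy would be (60+990)/1100 ≈ 0.95,
# yet the classifier finds only 60% of the minority class:
m = imbalance_metrics(tp=60, fp=10, fn=40, tn=990)
print(m)
```

The gap between the near-0.95 accuracy and the 0.60 minority recall is exactly why dedicated metrics are needed when comparing methods such as SMOTE or RUSBoost.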

Relevance: 30.00%

Abstract:

We write to comment on the recently published paper “Defining phytoplankton class boundaries in Portuguese transitional waters: an evaluation of the ecological quality status according to the Water Framework Directive” (Brito et al., 2012). That paper presents an integrated methodology for analysing the ecological quality status of several Portuguese transitional waters, using phytoplankton-related metrics. One of the systems analysed, the Guadiana estuary in southern Portugal, is considered the most problematic estuary, with its upstream water bodies classified as Poor in terms of ecological status. We strongly disagree with this conclusion, and we would like to draw attention to some methodological constraints that, in our opinion, underlie such misleading conclusions and should therefore not be neglected when using phytoplankton to assess the ecological status of natural waters.

Relevance: 30.00%

Abstract:

The Container Loading Problem (CLP) literature has traditionally evaluated the dynamic stability of cargo by applying two metrics to box arrangements: the mean number of boxes supporting the items, excluding those placed directly on the floor (M1), and the percentage of boxes with insufficient lateral support (M2). However, these metrics, which aim to be proxies for cargo stability during transportation, fail to reflect real-world dynamic stability conditions of the cargo. In this paper two new performance indicators are proposed to evaluate the dynamic stability of cargo arrangements: the number of fallen boxes (NFB) and the number of boxes within the Damage Boundary Curve fragility test (NB_DBC). Using 1500 solutions for well-known problem instances found in the literature, these new performance indicators are evaluated using a physics simulation tool (StableCargo), replacing real-world truck transportation with a simulation of the dynamic behaviour of container loading arrangements. Two new dynamic stability metrics that can be integrated within any container loading algorithm are also proposed. The metrics are analytical models of the proposed stability performance indicators, computed by multiple linear regression. Pearson’s r correlation coefficient was used as an evaluation parameter for the performance of the models. The extensive computational results show that the proposed metrics are better proxies for dynamic stability in the CLP than the two metrics previously in wide use.
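Pearson's r, used above to score how well the regression-based metrics track the simulated indicators, can be sketched directly from its definition; the paired values in the example are hypothetical, not results from the paper:

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient between two
    equal-length sequences; +1/-1 indicate perfect linear association."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Model-predicted number of fallen boxes vs. physics-simulated NFB:
predicted = [0.8, 2.1, 3.9, 5.2, 7.1]
simulated = [1, 2, 4, 5, 7]
print(pearson_r(predicted, simulated))
```

An r close to 1 says the analytical metric ranks and scales packing solutions almost exactly as the physics simulation does, which is what justifies using the cheap metric inside a loading algorithm.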

Relevance: 30.00%

Abstract:

This paper presents the results of the crowd image analysis challenge, as part of the PETS 2009 workshop. The evaluation is carried out using a selection of the metrics available in the Video Analysis and Content Extraction (VACE) program and from the CLassification of Events, Activities, and Relationships (CLEAR) consortium. The evaluation highlights the strengths of the authors’ systems in areas such as precision, accuracy and robustness.

Relevance: 30.00%

Abstract:

This paper presents the results of the crowd image analysis challenge of the Winter PETS 2009 workshop. The evaluation is carried out using a selection of the metrics developed in the Video Analysis and Content Extraction (VACE) program and by the CLassification of Events, Activities, and Relationships (CLEAR) consortium [13]. The evaluation highlights the detection and tracking performance of the authors’ systems in areas such as precision, accuracy and robustness. The performance is also compared to the results submitted to PETS 2009.
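The CLEAR evaluation framework referenced in both PETS abstracts above is built around the MOT accuracy and precision scores; a minimal sketch of their usual form, with hypothetical per-sequence totals, follows:

```python
def mota(false_negatives, false_positives, id_switches, ground_truth_objects):
    """CLEAR MOT accuracy: 1 - (FN + FP + ID switches) / total number of
    ground-truth objects, summed over all frames; 1.0 is perfect tracking."""
    return 1 - (false_negatives + false_positives + id_switches) / ground_truth_objects

def motp(total_localisation_error, matched_pairs):
    """CLEAR MOT precision: mean localisation error over all matched
    object-hypothesis pairs (lower is better for a distance-based error)."""
    return total_localisation_error / matched_pairs

# Totals accumulated over one tracking sequence (illustrative numbers):
print(mota(false_negatives=120, false_positives=80, id_switches=15,
           ground_truth_objects=2150))
print(motp(total_localisation_error=430.0, matched_pairs=1935))
```

MOTA folds all three error types into one accuracy figure, while MOTP isolates how tightly the tracker localises the objects it does match, which together cover the precision/accuracy axes the evaluation reports.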