971 results for Evaluation metrics
Abstract:
Measuring the social and environmental performance of property is necessary for meaningful triple bottom line (TBL) assessments. This paper demonstrates how relevant indicators derived from environmental rating systems allow reasonably straightforward collation of performance scores that support adjustments on a sliding scale. It also highlights the absence of a corresponding consensus on the important social metrics representing the third leg of the TBL tripod. Assessing TBL may be unavoidably imprecise, but if valuers and managers continue to ignore TBL concerns, their assessments may soon become less relevant given the emerging institutional milieu informing and reflecting business practices and societal expectations.
Abstract:
This paper presents a unified and systematic assessment of ten position control strategies for a hydraulic servo system with a single-ended cylinder driven by a proportional directional control valve. We aim to identify the methods that achieve better tracking, have low sensitivity to system uncertainties, and offer a good balance between development effort and end results. A formal approach to this problem, introduced herein, relies on several practical metrics. Their choice is important, as the comparison results between controllers can vary significantly depending on the selected criterion. Beyond the quantitative assessment, we also raise aspects that are difficult to quantify but must be kept in mind when considering the position control problem for this class of hydraulic servo systems.
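As a sketch of the kind of practical metric such controller comparisons rely on (the paper's own metric set is defined in the full text; the criteria below are generic, not the authors'), integral error measures over the position tracking error are commonly used:

    import numpy as np

    def tracking_error_metrics(t, reference, position):
        # Generic integral criteria (IAE, ISE, ITAE) for comparing position
        # controllers; e(t) is the position tracking error over time grid t.
        t = np.asarray(t, dtype=float)
        e = np.asarray(reference, dtype=float) - np.asarray(position, dtype=float)
        iae = np.trapz(np.abs(e), t)         # integral of absolute error
        ise = np.trapz(e ** 2, t)            # integral of squared error
        itae = np.trapz(t * np.abs(e), t)    # time-weighted absolute error
        return {"IAE": iae, "ISE": ise, "ITAE": itae}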
Abstract:
Increasing global competitiveness has forced manufacturing organizations to produce high-quality products more quickly and at a competitive cost, which demands continuous improvement techniques. In this paper, we propose a fuzzy-based performance evaluation method for lean supply chains. To understand the overall performance of a cost-competitive supply chain, we investigate the alignment of market strategy and the positioning of the supply chain. Competitive strategies can be accommodated by using different weight calculations for different supply chain situations. By identifying optimal performance metrics and applying performance evaluation methods, managers can predict the overall supply chain performance under a lean strategy.
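As a loose, hypothetical sketch of the aggregation step only (the paper's actual fuzzy method is more elaborate): each metric receives a normalized score, and strategy-specific weights change how the same scores rank a supply chain under different competitive strategies.

    def weighted_performance_score(metric_scores, strategy_weights):
        # Hypothetical simplification: aggregate normalized metric scores
        # (0..1) using weights chosen for a particular competitive strategy.
        total = sum(strategy_weights.values())
        return sum(metric_scores[m] * w for m, w in strategy_weights.items()) / total

    scores = {"cost": 0.8, "quality": 0.6, "delivery": 0.7}
    # A cost-focused strategy weights the same scores differently than a
    # responsiveness-focused one, changing the overall evaluation.
    cost_focused = weighted_performance_score(scores, {"cost": 0.6, "quality": 0.2, "delivery": 0.2})
    responsive = weighted_performance_score(scores, {"cost": 0.2, "quality": 0.2, "delivery": 0.6})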
Abstract:
IEEE 802.11p is the new standard for inter-vehicular communications (IVC) using the 5.9 GHz frequency band; it is planned to be widely deployed to enable cooperative systems. The uses and performance of 802.11p have been studied theoretically and in simulations over the past years. Unfortunately, many of these results have not been confirmed by on-track experimentation. In this paper, we describe field trials of 802.11p technology with our test vehicles. Metrics such as maximum range, latency and frame loss are examined.
Abstract:
Traffic safety studies demand more than existing micro-simulation models can offer, as those models postulate that every driver exhibits safe behaviour. All microscopic traffic simulation models incorporate a car-following model, and the Gazis–Herman–Rothery (GHR) model is among the most widely used. This paper highlights the limitations of the GHR car-following model in representing longitudinal driving behaviour for safety study purposes. The study reviews and compares different versions of the GHR model. To enable the GHR model to reproduce safety metrics precisely, a new set of car-following model parameters is proposed to simulate unsafe vehicle conflicts. NGSIM vehicle trajectory data are used to evaluate the new model, and short following headways and time-to-collision are employed to assess critical safety events within the traffic flow. Risky events are extracted from the available NGSIM data to evaluate the modified model against generic versions of the GHR model. The simulation results illustrate that the proposed model predicts safety metrics, including near-miss rear-end crashes, better than the generic GHR model, and that it can potentially facilitate assessing and predicting the safety of traffic facilities using microscopic simulation.
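For reference, the generic GHR stimulus-response formulation that such variants build on can be sketched as follows, where c, m and l are the calibration parameters being re-estimated:

    def ghr_acceleration(c, m, l, v_follower, delta_v, delta_x):
        # Generic GHR response: a(t + T) = c * v(t)^m * dv(t) / dx(t)^l,
        # with T the driver reaction time, v the follower speed, dv the
        # relative speed and dx the spacing to the leader. The paper's
        # contribution is a new parameter set (c, m, l) tuned to reproduce
        # unsafe conflicts, not a change to this functional form.
        return c * (v_follower ** m) * delta_v / (delta_x ** l)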
Abstract:
High Dynamic Range (HDR) imaging was used to collect luminance information at workstations in two open-plan office buildings in Queensland, Australia: one lit by skylights, vertical windows and electric light, and another by skylights and electric light. This paper compares illuminance and luminance data collected in these offices with occupant feedback to evaluate these open-plan environments against available and emerging metrics for visual comfort and glare. The study highlights issues of daylighting quality and measurement specific to open-plan spaces. The results demonstrate that overhead glare is a serious threat to user acceptance of skylights, and that electric-daylight integration and controls have a major impact on the perception of daylighting quality. With regard to the measurement of visual comfort, the Daylight Glare Probability (DGP) gave poor agreement with occupant reports of discomfort glare in open-plan spaces with skylights, while the CIE Glare Index (CGI) gave the best agreement. Horizontal and vertical illuminances gave no indication of visual comfort in these spaces.
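For context, the DGP referenced here is conventionally computed from the vertical eye illuminance and each glare source's luminance, solid angle and position index (the standard Wienold-Christoffersen formulation is assumed):

    import math

    def daylight_glare_probability(e_v, sources):
        # e_v: vertical illuminance at the eye (lux); sources: list of
        # (L_s, omega_s, P) tuples for each glare source. Constants follow
        # the commonly published DGP definition.
        glare_sum = sum((L ** 2 * omega) / (e_v ** 1.87 * P ** 2)
                        for L, omega, P in sources)
        return 5.87e-5 * e_v + 9.18e-2 * math.log10(1 + glare_sum) + 0.16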
Abstract:
Multiple reaction monitoring (MRM) mass spectrometry coupled with stable isotope dilution (SID) and liquid chromatography (LC) is increasingly used in biological and clinical studies for precise and reproducible quantification of peptides and proteins in complex sample matrices. Robust LC-SID-MRM-MS-based assays that can be replicated across laboratories and ultimately in clinical laboratory settings require standardized protocols to demonstrate that the analysis platforms are performing adequately. We developed a system suitability protocol (SSP), which employs a predigested mixture of six proteins, to facilitate performance evaluation of LC-SID-MRM-MS instrument platforms configured with nanoflow-LC systems interfaced to triple quadrupole mass spectrometers. The SSP was designed for use with low-multiplex analyses as well as high-multiplex approaches when software-driven scheduling of data acquisition is required. Performance was assessed by monitoring a range of chromatographic and mass spectrometric metrics, including peak width, chromatographic resolution, peak capacity, and the variability in peak area and analyte retention time (RT) stability. The SSP, which was evaluated in 11 laboratories on a total of 15 different instruments, enabled early diagnosis of LC and MS anomalies that indicated suboptimal LC-MRM-MS performance. The observed range in variation of each of the metrics scrutinized serves to define the criteria for optimized LC-SID-MRM-MS platforms for routine use, with pass/fail criteria for system suitability performance measures defined as: peak area coefficient of variation <0.15, peak width coefficient of variation <0.15, standard deviation of RT <0.15 min (9 s), and RT drift <0.5 min (30 s). The deleterious effect of a marginally performing LC-SID-MRM-MS system on the limit of quantification (LOQ) in targeted quantitative assays illustrates the use of and need for an SSP to establish robust and reliable system performance. Use of an SSP helps to ensure that analyte quantification measurements can be replicated with good precision within and across multiple laboratories and should facilitate more widespread use of MRM-MS technology by the basic biomedical and clinical laboratory research communities.
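As an illustration of how the stated pass/fail criteria could be applied to replicate measurements (the drift definition below, last minus first RT, is an assumption):

    import numpy as np

    def passes_system_suitability(peak_areas, peak_widths, retention_times):
        # Pass/fail thresholds taken directly from the abstract: CVs < 0.15
        # for peak area and peak width, RT standard deviation < 0.15 min,
        # and RT drift < 0.5 min.
        area_cv = np.std(peak_areas, ddof=1) / np.mean(peak_areas)
        width_cv = np.std(peak_widths, ddof=1) / np.mean(peak_widths)
        rt_sd = np.std(retention_times, ddof=1)
        rt_drift = abs(retention_times[-1] - retention_times[0])  # assumed definition
        return area_cv < 0.15 and width_cv < 0.15 and rt_sd < 0.15 and rt_drift < 0.5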
Abstract:
As the systematic investigation of Twitter as a communications platform continues, the question of developing reliable comparative metrics for the evaluation of public, communicative phenomena on Twitter becomes paramount. What is necessary here is the establishment of an accepted standard for the quantitative description of user activities on Twitter. Such a standard needs to be flexible enough to apply to a wide range of communicative situations, such as the evaluation of individual users' and groups of users' Twitter communication strategies, the examination of communicative patterns within hashtags and other identifiable ad hoc publics on Twitter (Bruns & Burgess, 2011), and even the analysis of very large datasets of everyday interactions on the platform. A framework for the quantitative analysis of Twitter communication enables researchers in different areas (e.g., communication studies, sociology, information systems) to adapt methodological approaches and conduct analyses on their own. Besides yielding general findings about communication structures on Twitter, large amounts of data might be used to better understand issues or events retrospectively, to detect issues or events at an early stage, or even to predict certain real-world developments (e.g., election results; cf. Tumasjan, Sprenger, Sandner, & Welpe, 2010, for an early attempt to do so).
Abstract:
Introduction: This study examines and compares the dosimetric quality of radiotherapy treatment plans for prostate carcinoma across a cohort of 163 patients treated at 5 centres: 83 treated with three-dimensional conformal radiotherapy (3DCRT), 33 treated with intensity-modulated radiotherapy (IMRT) and 47 treated with volumetric-modulated arc therapy (VMAT). Methods: Treatment plan quality was evaluated in terms of target dose homogeneity and organ-at-risk sparing, through the use of a set of dose metrics. These included the mean, maximum and minimum doses; the homogeneity and conformity indices for the target volumes; and a selection of dose coverage values relevant to each organ-at-risk. Statistical significance was evaluated using two-tailed Welch's t-tests. The Monte Carlo DICOM ToolKit software was adapted to permit the evaluation of dose metrics from DICOM data exported from a commercial radiotherapy treatment planning system. Results: The 3DCRT treatment plans offered greater planning target volume dose homogeneity than the other two treatment modalities. The IMRT and VMAT plans offered greater dose reduction in the organs-at-risk, with increased compliance with recommended organ-at-risk dose constraints compared to conventional 3DCRT treatments. When compared to each other, IMRT and VMAT did not provide significantly different treatment plan quality for like-sized tumour volumes. Conclusions: This study indicates that IMRT and VMAT provide similar dosimetric quality, which is superior to that achieved with 3DCRT.
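For reference, the homogeneity and conformity indices used in such comparisons are commonly defined as below; exact definitions vary between studies, so these ICRU/RTOG-style forms are an assumption rather than the paper's own:

    def homogeneity_index(d2, d98, d50):
        # ICRU-83-style HI from near-maximum (D2%), near-minimum (D98%) and
        # median (D50%) target doses; lower values mean a more homogeneous dose.
        return (d2 - d98) / d50

    def conformity_index(prescription_isodose_volume, target_volume):
        # RTOG-style CI: volume enclosed by the prescription isodose divided
        # by the target volume; values near 1 indicate good conformity.
        return prescription_isodose_volume / target_volume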
Abstract:
Objective: To formally evaluate the written discharge advice available for people with mild traumatic brain injury (mTBI). Methods: Eleven publications met the inclusion criteria: (1) intended for adults; (2) no longer than two A4 pages; (3) published in English; (4) freely accessible; and (5) currently used (or suitable for use) in Australian hospital emergency departments or similar settings. Two independent raters evaluated the content and style of each publication against established standards. The readability of each publication, the diagnostic term(s) contained in it and a modified Patient Literature Usefulness Index (mPLUI) were also evaluated. Results: The mean content score was 19.18 ± 8.53 (maximum = 31) and the mean style score was 6.8 ± 1.34 (maximum = 8). The mean Flesch-Kincaid reading ease score was 66.42 ± 4.3. The mean mPLUI score was 65.86 ± 14.97 (maximum = 100). Higher scores on these metrics indicate more desirable properties. Over 80% of the publications used mixed diagnostic terminology. One publication scored optimally on two of the four metrics and highly on the others. Discussion: The content, style, readability and usefulness of written mTBI discharge advice were highly variable. The provision of written information to patients with mTBI is advised, but this variability in materials highlights the need for evaluation before distribution. Areas are identified to guide the improvement of written mTBI discharge advice.
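For reference, the reading-ease score reported here is conventionally computed with the standard Flesch formula (assumed to be the one used); a mean near 66 falls in the "standard difficulty" band:

    def flesch_reading_ease(total_words, total_sentences, total_syllables):
        # Standard Flesch reading-ease formula: higher scores (toward 100)
        # indicate text that is easier to read.
        return (206.835
                - 1.015 * (total_words / total_sentences)
                - 84.6 * (total_syllables / total_words))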
Abstract:
The aim of this research was to develop a set of reliable, valid preparedness metrics built around a comprehensive framework for assessing hospital preparedness. The research used a combination of qualitative and quantitative methods, including interviews, a Delphi study and a survey of hospitals in the Sichuan Province of China. The resultant framework is constructed around the stages of disaster management and includes nine key elements. Factor analysis identified four contributing factors. A comparison of hospitals' preparedness using these four factors revealed that tertiary-grade, teaching and general hospitals performed better than secondary-grade, non-teaching and non-general hospitals.
Abstract:
The spatial error structure of daily precipitation derived from the latest version 7 (v7) Tropical Rainfall Measuring Mission (TRMM) level 2 data products is studied through comparison with the Asian Precipitation Highly Resolved Observational Data Integration Towards Evaluation of the Water Resources (APHRODITE) data over a subtropical region of the Indian subcontinent, for seasonal rainfall over the 6 years from June 2002 to September 2007. The data products examined include v7 data from the TRMM Microwave Imager (TMI) radiometer and the precipitation radar (PR), namely 2A12, 2A25, and 2B31 (combined data from PR and TMI). The spatial distribution of uncertainty in these data products was quantified based on performance metrics derived from the contingency table. For seasonal daily precipitation over a subtropical basin in India, the 2A12 data product showed greater skill in detecting and quantifying the volume of rainfall than the 2A25 and 2B31 data products. Error characterization using various error models revealed that random errors from multiplicative error models were homoscedastic and better represented rainfall estimates from the 2A12 algorithm. Error decomposition performed to disentangle systematic and random errors verified that the multiplicative error model representing rainfall from the 2A12 algorithm captured a greater percentage of the systematic error than those for the 2A25 or 2B31 algorithms. Although the radiometer-derived 2A12 rainfall data are known to suffer from many sources of uncertainty, the spatial analysis over the case study region of India shows that the 2A12 rainfall estimates are in very good agreement with the reference estimates for the data period considered.
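For illustration, typical detection-skill metrics derived from a rain/no-rain contingency table (hits H, misses M, false alarms F) include the following; the paper's exact metric set is an assumption here:

    def contingency_metrics(hits, misses, false_alarms):
        # Standard categorical skill scores for precipitation detection.
        pod = hits / (hits + misses)                    # probability of detection
        far = false_alarms / (hits + false_alarms)      # false alarm ratio
        csi = hits / (hits + misses + false_alarms)     # critical success index
        bias = (hits + false_alarms) / (hits + misses)  # frequency bias
        return {"POD": pod, "FAR": far, "CSI": csi, "bias": bias}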
Abstract:
Web threats are becoming a major issue for both governments and companies. Web threats increased by as much as 600% during the last year (WebSense, 2013). This is a significant issue, since many major businesses provide these services. Denial of Service (DoS) attacks are among the most significant web threats, and their aim is generally to waste the resources of the target machine (Mirkovic & Reiher, 2004). Distributed Denial of Service (DDoS) attacks are typically executed from many sources and can result in large traffic flows. During the last year, 11% of DDoS attacks exceeded 60 Gbps (Prolexic, 2013a). DDoS attacks are usually performed from large botnets, which are networks of remotely controlled computers. There is an increasing effort by governments and companies to shut down botnets (Dittrich, 2012), which has led attackers to look for alternative DDoS attack methods. One technique to which attackers are returning is the DDoS amplification attack. Amplification attacks use intermediate devices, called amplifiers, to amplify the attacker's traffic. This work outlines an evaluation tool and evaluates an amplification attack based on the Trivial File Transfer Protocol (TFTP). This attack could have an amplification factor of approximately 60, which rates highly alongside other researched amplification attacks. This could be a substantial issue globally, given that the protocol is used by approximately 599,600 publicly open TFTP servers. Mitigation methods for this threat have also been considered and a variety of countermeasures are proposed. The effects of this attack on both the amplifier and the target were analysed based on the proposed metrics. While it has been reported that the breaching of TFTP would be possible (Schultz, 2013), this paper provides a complete methodology for the setup of the attack, and its verification.
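For context, the amplification factor of such an attack is usually taken as the ratio of bytes reflected toward the victim to bytes sent by the attacker (this bandwidth-based definition is an assumption; the paper may measure it differently):

    def amplification_factor(response_bytes, request_bytes):
        # Bandwidth amplification factor: bytes the amplifier sends toward
        # the victim per byte of spoofed request traffic.
        return response_bytes / request_bytes

    # Hypothetical illustration: a 50-byte spoofed TFTP read request that
    # elicits ~3000 bytes of retransmitted data blocks yields a factor of 60.
    print(amplification_factor(3000, 50))  # -> 60.0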
Abstract:
Low-power and lossy networks (LLNs) are usually composed of static nodes, but the increasing demand for mobility in mobile robotics and dynamic environments raises the question of how a routing protocol for low-power and lossy networks, such as RPL, would perform if a mobile sink were deployed. In this paper we investigate and evaluate the behaviour of the RPL protocol in fixed and mobile sink environments with respect to network metrics such as latency, packet delivery ratio (PDR) and energy consumption. Extensive simulations using the Instant Contiki simulator show significant performance differences between fixed and mobile sink environments. Fixed-sink LLNs performed better in terms of average power consumption, latency and packet delivery ratio. The results also demonstrate that the RPL protocol is sensitive to mobility, which increases the number of isolated nodes.
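As a minimal sketch of two of the metrics named above, under the usual LLN definitions (assumed here):

    def packet_delivery_ratio(received_at_sink, sent):
        # PDR: fraction of application packets that reach the sink.
        return received_at_sink / sent

    def average_latency(send_times, receive_times):
        # Mean end-to-end delay over successfully delivered packets,
        # with matching send/receive timestamps per packet.
        return sum(r - s for s, r in zip(send_times, receive_times)) / len(send_times)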
Abstract:
In this paper, metrics for assessing the performance of directional modulation (DM) physical-layer secure wireless systems are discussed. DM systems are shown to fall into two categories, static and dynamic, and the behavior of each type is discussed for QPSK modulation. Besides EVM-like and BER metrics, the secrecy rate as used in the information theory community is also derived for the purpose of this QPSK DM system evaluation.
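For reference, an EVM-style metric for a received QPSK constellation can be sketched as the RMS error between received and ideal symbols, normalized by the reference power (normalization conventions vary, so this form is an assumption):

    import numpy as np

    def evm_rms(received, reference):
        # RMS error vector magnitude between received complex symbols and
        # their ideal constellation points, as a fraction (x100 for percent).
        received = np.asarray(received, dtype=complex)
        reference = np.asarray(reference, dtype=complex)
        error_power = np.mean(np.abs(received - reference) ** 2)
        return np.sqrt(error_power / np.mean(np.abs(reference) ** 2))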