239 results for metrics
Abstract:
This article considers earthwork planning and devises a generic block partitioning and modelling approach to provide strategic plans at various levels of detail. Conceptually, this approach is more accurate and comprehensive than others, for instance those that are section based. In response to environmental concerns, fuel consumption and emissions were adopted as the metric for decision making; haulage distance and gradient are also included, as they are important components of this metric. Advantageously, the fuel consumption metric is generic: it captures the physical difficulty of travelling over inclines of different gradients in a way that is consistent across all hauling vehicles. For validation, the proposed models and techniques were applied to a real-world road project. The numerical investigations demonstrate that the models can be solved with relatively little CPU time. The proposed block models also produce solutions of superior quality, i.e. with reduced fuel consumption and cost. Furthermore, the resulting plans differ considerably from those based solely upon a distance-based metric, demonstrating a need for industry to reflect upon its current practices.
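A first-order physical rationale for such a gradient-aware metric (an illustrative textbook approximation, not the article's model): the tractive work needed to haul a load of mass $m$ over a segment of length $d$ at grade angle $\theta$ is approximately

\[
W \approx m g \left( \sin\theta + C_{rr} \cos\theta \right) d,
\]

where $g$ is gravitational acceleration and $C_{rr}$ a rolling-resistance coefficient. Fuel burn scales with this work for any hauling vehicle, which is why a fuel-based metric can remain consistent across fleets while still penalising steep inclines.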
Abstract:
Multiple reaction monitoring (MRM) mass spectrometry coupled with stable isotope dilution (SID) and liquid chromatography (LC) is increasingly used in biological and clinical studies for precise and reproducible quantification of peptides and proteins in complex sample matrices. Robust LC-SID-MRM-MS-based assays that can be replicated across laboratories and ultimately in clinical laboratory settings require standardized protocols to demonstrate that the analysis platforms are performing adequately. We developed a system suitability protocol (SSP), which employs a predigested mixture of six proteins, to facilitate performance evaluation of LC-SID-MRM-MS instrument platforms configured with nanoflow-LC systems interfaced to triple quadrupole mass spectrometers. The SSP was designed for use with low multiplex analyses as well as high multiplex approaches when software-driven scheduling of data acquisition is required. Performance was assessed by monitoring a range of chromatographic and mass spectrometric metrics, including peak width, chromatographic resolution, peak capacity, and the variability in peak area and analyte retention time (RT) stability. The SSP, which was evaluated in 11 laboratories on a total of 15 different instruments, enabled early diagnosis of LC and MS anomalies that indicated suboptimal LC-MRM-MS performance. The observed range in variation of each of the metrics scrutinized serves to define the criteria for optimized LC-SID-MRM-MS platforms for routine use, with pass/fail criteria for system suitability performance measures defined as peak area coefficient of variation <0.15, peak width coefficient of variation <0.15, standard deviation of RT <0.15 min (9 s), and RT drift <0.5 min (30 s). The deleterious effect of a marginally performing LC-SID-MRM-MS system on the limit of quantification (LOQ) in targeted quantitative assays illustrates the utility of, and need for, an SSP to establish robust and reliable system performance. Use of an SSP helps to ensure that analyte quantification measurements can be replicated with good precision within and across multiple laboratories and should facilitate more widespread use of MRM-MS technology by the basic biomedical and clinical laboratory research communities.
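A minimal sketch of how the quoted pass/fail thresholds might be checked across replicate injections (hypothetical data layout and function name; drift is taken here as the RT spread across replicates, one plausible reading):

```python
import numpy as np

def system_suitability(peak_areas, peak_widths, retention_times):
    """Check replicate injections against the SSP pass/fail thresholds."""
    cv = lambda x: np.std(x, ddof=1) / np.mean(x)   # coefficient of variation
    rt = np.asarray(retention_times, dtype=float)
    checks = {
        "peak area CV < 0.15":  cv(peak_areas) < 0.15,
        "peak width CV < 0.15": cv(peak_widths) < 0.15,
        "RT SD < 0.15 min":     np.std(rt, ddof=1) < 0.15,
        # drift approximated as max-min RT across replicates (assumption)
        "RT drift < 0.5 min":   (rt.max() - rt.min()) < 0.5,
    }
    return all(checks.values()), checks

ok, report = system_suitability(
    peak_areas=[1.02e6, 0.98e6, 1.05e6, 1.00e6],
    peak_widths=[0.21, 0.22, 0.20, 0.21],           # minutes
    retention_times=[14.31, 14.35, 14.30, 14.33],   # minutes
)
print(ok, report)
```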
Abstract:
Novel computer vision techniques have been developed for automatic monitoring of crowded environments such as airports, railway stations and shopping malls. Using video feeds from multiple cameras, the techniques enable crowd counting, crowd flow monitoring, queue monitoring and abnormal event detection. The outcome of the research is useful for surveillance applications and for obtaining operational metrics to improve business efficiency.
Abstract:
This paper presents a long-term experiment in which a mobile robot uses adaptive spherical views to localize itself and navigate inside a non-stationary office environment. The office contains seven members of staff, and its appearance changes continuously over time due to their daily activities. The experiment runs as an episodic navigation task in the office over a period of eight weeks. The spherical views are stored in the nodes of a pose graph and are updated in response to changes in the environment. The updating mechanism is inspired by the concepts of long- and short-term memories. The experimental evaluation uses three performance metrics that assess the quality of both the adaptive spherical views and the navigation over time.
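One way such long-/short-term updating is often realised (a minimal sketch under assumed details; the paper's actual mechanism may differ) is to blend fresh observations into each node's stored views at two different rates:

```python
import numpy as np

def update_spherical_view(node_view, observation, alpha_short=0.5, alpha_long=0.05):
    """Blend a fresh observation into a pose-graph node's reference views.

    node_view: dict with 'short' and 'long' feature arrays; the short-term
    view adapts quickly to change, the long-term view only slowly.
    """
    node_view["short"] = (1 - alpha_short) * node_view["short"] + alpha_short * observation
    node_view["long"] = (1 - alpha_long) * node_view["long"] + alpha_long * observation
    return node_view

node = {"short": np.zeros(128), "long": np.zeros(128)}  # hypothetical descriptor
node = update_spherical_view(node, np.random.rand(128))
```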
Abstract:
Quality of experience (QoE) measures the overall perceived quality of mobile video delivery, combining subjective user experience and objective system performance. Current QoE computing models have two main limitations: 1) insufficient consideration of the factors influencing QoE; and 2) limited study of QoE models for acceptability prediction. In this paper, a set of novel acceptability-based QoE models, denoted A-QoE, is proposed based on the results of comprehensive user studies on subjective quality acceptance assessments. The models are able to predict users' acceptability and pleasantness in various mobile video usage scenarios. Statistical regression analysis was used to build the models from a group of influencing factors as independent predictors, including encoding parameters and bitrate, video content characteristics, and mobile device display resolution. The performance of the proposed A-QoE models has been compared with three well-known objective video quality assessment metrics: PSNR, SSIM and VQM. The proposed A-QoE models achieve high prediction accuracy and usage flexibility. Future user-centred mobile video delivery systems can benefit from applying the proposed QoE-based management to optimize video coding and quality delivery decisions.
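As a sketch of the modelling approach (hypothetical predictor names and data; the published models' coefficients and exact functional form are not reproduced here), acceptability prediction from encoding and context factors might look like:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical sessions: bitrate (kbps), frame rate (fps), display width (px),
# content motion level (0-1); label 1 = user judged the quality acceptable.
X = np.array([
    [ 200, 15,  480, 0.8],
    [ 600, 25,  480, 0.3],
    [1200, 30, 1280, 0.6],
    [ 300, 15, 1280, 0.9],
    [ 900, 30,  720, 0.4],
    [ 150, 12,  720, 0.7],
])
y = np.array([0, 1, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)
p_accept = model.predict_proba([[800, 25, 720, 0.5]])[0, 1]  # acceptability
```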
Abstract:
Interdisciplinary research is often funded by national government initiatives or large corporate sponsorship and, as such, demands periodic reporting on the use of those funds. For reasons of accountability, governance and communication to the taxpayer, the outcomes of the research need to be measured and understood. The interdisciplinary approach to research raises many challenges for impact reporting. This presentation considers best-practice workflow models and methodologies. Novel methodologies that can be added to the usual metrics of academic publications include analysis of the percentage share of total publications in a subject or keyword field, identifying the most cited publication in a key phrase category, analysis of who has cited or reviewed the work, and benchmarking of these data against others in the same category. At QUT, interest in how collaborative networking is trending within a research theme has led to the creation of some useful co-authorship graphs that demonstrate the network positions of authors and the strength of their scientific collaborations within a group. The scale of international collaboration is also worth including in the assessment. However, despite all of the tools and techniques available, the most useful way researchers can help themselves and the process is to set up and maintain their researcher identifier and profile.
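A minimal sketch (assuming networkx and an illustrative author list, not the QUT tooling itself) of how co-authorship network positions and collaboration strength can be quantified:

```python
import networkx as nx

# Hypothetical co-authorship data: (author_a, author_b, joint papers)
edges = [("Smith", "Chen", 5), ("Smith", "Kumar", 2),
         ("Chen", "Kumar", 3), ("Chen", "Rossi", 1)]

G = nx.Graph()
G.add_weighted_edges_from(edges)

centrality = nx.degree_centrality(G)  # network position of each author
strongest = max(G.edges(data="weight"), key=lambda e: e[2])  # strongest tie
print(centrality, strongest)
```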
Abstract:
The detection and correction of defects remains among the most time-consuming and expensive aspects of software development. Extensive automated testing and code inspections may mitigate their effect, but some code fragments are necessarily more likely to be faulty than others, and automated identification of fault-prone modules helps to focus testing and inspections, thus limiting wasted effort and potentially improving detection rates. However, software metrics data are often extremely noisy, with enormous imbalances in the size of the positive and negative classes. In this work, we present a new approach to predictive modelling of fault proneness in software modules, introducing a new feature representation to overcome some of these issues. This rank sum representation offers improved, or at worst comparable, performance relative to earlier approaches on standard data sets, and readily allows the user to choose an appropriate trade-off between precision and recall to optimise inspection effort for different testing environments. The method is evaluated using the NASA Metrics Data Program (MDP) data sets, and performance is compared with existing studies based on the Support Vector Machine (SVM) and Naïve Bayes (NB) classifiers, and with our own comprehensive evaluation of these methods.
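One plausible reading of a rank-based feature representation (a hedged sketch using scipy; the paper's exact construction may differ) replaces each raw metric value with its rank across modules, blunting the effect of noisy, heavy-tailed metric distributions:

```python
import numpy as np
from scipy.stats import rankdata

def rank_features(X):
    """Replace each raw software-metric column with its within-column ranks."""
    X = np.asarray(X, dtype=float)
    return np.column_stack([rankdata(col) for col in X.T])

# Hypothetical module metrics: lines of code, cyclomatic complexity, churn
X_raw = [[1200, 35, 4], [90, 3, 0], [5400, 88, 12], [310, 9, 1]]
X_rank = rank_features(X_raw)   # heavy-tailed raw values become uniform ranks
```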
Abstract:
This work aims to contribute to the reliability and integrity of perceptual systems for autonomous ground vehicles. Information-theoretic metrics for evaluating the quality of sensor data are proposed and applied to visual and infrared camera images. The contribution of the proposed metrics to the discrimination of challenging conditions is discussed and illustrated in the presence of airborne dust and smoke.
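One common information-theoretic instantiation (an illustrative sketch, not necessarily the metric used in the paper) is the Shannon entropy of an image's intensity histogram, which tends to drop as airborne dust or smoke washes out scene detail:

```python
import numpy as np

def image_entropy(img, bins=256):
    """Shannon entropy (bits) of a grayscale image's intensity histogram."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 255))
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins to avoid log(0)
    return -np.sum(p * np.log2(p))

clear = np.random.randint(0, 256, (480, 640))   # detailed scene: high entropy
smoky = np.full((480, 640), 128)                # washed-out frame: entropy ~ 0
print(image_entropy(clear), image_entropy(smoky))
```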
Abstract:
The global financial crisis (GFC) of 2008 rocked local, regional, and state economies throughout the world. Several intermediate outcomes of the GFC have been well documented in the literature, including loss of jobs and reduced income. Relatively little research has, however, examined the impacts of the GFC on individual-level travel behaviour change. To address this shortcoming, HABITAT panel data were employed to estimate a multinomial logit model of mode switching behaviour between 2007 (pre-GFC) and 2009 (post-GFC) for a baby boomer cohort in Brisbane, Australia, a city within a developed country that was, on many metrics, the least affected by the GFC. In addition, a Poisson regression model was estimated to model the number of trips made by individuals in 2007, 2008, and 2009; the South East Queensland Travel Survey datasets were used to develop this model. Four linear regression models were estimated to assess the effects of the GFC on time allocated to travel during a day: one for each of three travel modes (public transport, active transport, and less environmentally friendly transport), and an overall travel time model irrespective of mode. The results reveal that individuals who lost their job or whose income fell between 2007 and 2009 were more likely to switch to public transport. Individuals also made significantly fewer trips in 2008 and 2009 than in 2007. Individuals spent significantly less time using less environmentally friendly transport but more time using public transport in 2009. Overall, baby boomers switched to more environmentally friendly travel modes during the GFC.
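As a sketch of the trip-count modelling step (hypothetical variable names and data, not the HABITAT/SEQTS analysis itself), a Poisson regression of daily trips on survey-year dummies might look like:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical person-day records: daily trips with year dummies (2007 baseline)
trips     = np.array([4, 3, 5, 2, 3, 2, 4, 3, 2, 3])
year_2008 = np.array([0, 0, 0, 1, 1, 1, 0, 0, 1, 0])
year_2009 = np.array([0, 0, 0, 0, 0, 0, 1, 1, 0, 1])

X = sm.add_constant(np.column_stack([year_2008, year_2009]))
fit = sm.GLM(trips, X, family=sm.families.Poisson()).fit()
print(np.exp(fit.params))   # multiplicative change in expected trips vs 2007
```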
Abstract:
IEEE 802.11p is the new standard for intervehicular communications (IVC) using the 5.9 GHz frequency band; it is planned to be widely deployed to enable cooperative systems. The uses and performance of 802.11p have been studied theoretically and in simulations over the past years. Unfortunately, many of these results have not been confirmed by on-track experimentation. In this paper, we describe field trials of 802.11p technology with our test vehicles; metrics such as maximum range, latency and frame loss are examined. We then propose a detailed model of 802.11p that can be used to simulate its performance accurately within Cooperative Systems (CS) applications.
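A minimal sketch (hypothetical log format, not the authors' tooling) of how the reported frame loss and latency metrics could be computed from matched send/receive logs:

```python
def link_metrics(sent, received):
    """Frame loss ratio and mean latency from matched send/receive logs.

    sent:     {frame_id: tx_timestamp_s}; received: {frame_id: rx_timestamp_s}
    """
    delivered = [fid for fid in sent if fid in received]
    loss_ratio = 1 - len(delivered) / len(sent)
    latencies = [received[f] - sent[f] for f in delivered]
    mean_latency = sum(latencies) / len(latencies) if latencies else float("nan")
    return loss_ratio, mean_latency

loss, lat = link_metrics(
    sent={1: 0.000, 2: 0.010, 3: 0.020},
    received={1: 0.0021, 3: 0.0224},   # frame 2 was lost in transit
)
```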
Abstract:
Mosquito-borne diseases pose some of the greatest challenges in public health, especially in tropical and sub-tropical regions of the world. Efforts to control these diseases have been underpinned by a theoretical framework developed for malaria by Ross and Macdonald, including models, metrics for measuring transmission, and theory of control that identifies key vulnerabilities in the transmission cycle. That framework, especially Macdonald's formula for R0 and its entomological derivative, vectorial capacity, is now used to study dynamics and design interventions for many mosquito-borne diseases. A systematic review of 388 models published between 1970 and 2010 found that the vast majority adopted the Ross–Macdonald assumption of homogeneous transmission in a well-mixed population. Studies comparing models and data question these assumptions and point to the capacity to model heterogeneous, focal transmission as the most important but relatively unexplored component of current theory. Fine-scale heterogeneity causes transmission dynamics to be nonlinear and poses problems for modeling, epidemiology and measurement. Novel mathematical approaches show how heterogeneity arises from the biology and the landscape on which the processes of mosquito biting and pathogen transmission unfold. Emerging theory focuses attention on the ecological and social context for mosquito blood feeding, the movement of both hosts and mosquitoes, and the relevant spatial scales for measuring transmission and for modeling dynamics and control.
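For reference, in one standard form (textbook notation following the usual Ross–Macdonald conventions rather than any single paper), Macdonald's basic reproductive number and vectorial capacity are written as

\[
R_0 = \frac{m a^2 b c \, p^n}{r \left(-\ln p\right)}, \qquad C = \frac{m a^2 p^n}{-\ln p},
\]

where $m$ is mosquito density per human, $a$ the daily human-biting rate per mosquito, $p$ the daily mosquito survival probability, $n$ the extrinsic incubation period in days, $b$ and $c$ the transmission efficiencies between vectors and hosts, and $r$ the human recovery rate. The $a^2$ term, arising because a mosquito must bite twice to transmit, is one reason fine-scale heterogeneity in biting has such a nonlinear effect on transmission.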
Abstract:
Objective: This article explores patterns of terrorist activity over the period from 2000 through 2010 across three target countries: Indonesia, the Philippines and Thailand. Methods: We use self-exciting point process models to create interpretable and replicable metrics for three key terrorism concepts: risk, resilience and volatility, as defined in the context of terrorist activity. Results: Analysis of the data shows significant and important differences in the risk, volatility and resilience metrics over time across the three countries. For the three countries analysed, we show that risk varied on a scale from 0.005 to 1.61 "expected terrorist attacks per day", volatility ranged from 0.820 to 0.994 "additional attacks caused by each attack", and resilience, measured as the number of days until risk subsides to its pre-attack level, ranged from 19 to 39 days. We find that of the three countries, Indonesia had the lowest average risk and volatility and the highest level of resilience, indicative of the relatively sporadic nature of terrorist activity in Indonesia. The high terrorism risk and low resilience in the Philippines were a function of a more intense, less clustered pattern of terrorism than was evident in Indonesia. Conclusions: Mathematical models hold great promise for creating replicable, reliable and interpretable "metrics" for key terrorism concepts such as risk, resilience and volatility.
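The self-exciting (Hawkes) point process underlying such metrics has the conditional intensity (a standard form; the paper's exact kernel is not reproduced here)

\[
\lambda(t) = \mu + \sum_{t_i < t} g(t - t_i),
\]

where $\mu$ is the baseline attack rate (risk) and the branching ratio $\int_0^\infty g(u)\,du$ gives the expected number of additional attacks triggered by each attack, matching the interpretation of volatility above. Resilience then corresponds to how quickly $\lambda(t)$ decays back toward $\mu$ after an attack.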
Abstract:
Introduction: This study examines and compares the dosimetric quality of radiotherapy treatment plans for prostate carcinoma across a cohort of 163 patients treated at 5 centres: 83 treated with three-dimensional conformal radiotherapy (3DCRT), 33 treated with intensity-modulated radiotherapy (IMRT) and 47 treated with volumetric-modulated arc therapy (VMAT). Methods: Treatment plan quality was evaluated in terms of target dose homogeneity and organ-at-risk sparing, using a set of dose metrics. These included the mean, maximum and minimum doses; the homogeneity and conformity indices for the target volumes; and a selection of dose coverage values relevant to each organ-at-risk. Statistical significance was evaluated using two-tailed Welch's t-tests. The Monte Carlo DICOM ToolKit software was adapted to permit the evaluation of dose metrics from DICOM data exported from a commercial radiotherapy treatment planning system. Results: The 3DCRT treatment plans offered greater planning target volume dose homogeneity than the other two treatment modalities. The IMRT and VMAT plans offered greater dose reduction in the organs-at-risk, with increased compliance with recommended organ-at-risk dose constraints compared to conventional 3DCRT treatments. When compared to each other, IMRT and VMAT did not provide significantly different treatment plan quality for like-sized tumour volumes. Conclusions: This study indicates that IMRT and VMAT provide similar dosimetric quality, superior to that achieved with 3DCRT.
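Common definitions for the two target-volume indices (several variants exist in the literature; these are illustrative, not necessarily the ones used in the study):

\[
\mathrm{HI} = \frac{D_{2\%} - D_{98\%}}{D_{50\%}}, \qquad \mathrm{CI} = \frac{V_{\mathrm{RI}}}{\mathrm{TV}},
\]

where $D_{x\%}$ is the dose received by $x\%$ of the planning target volume, $V_{\mathrm{RI}}$ is the volume enclosed by the reference isodose, and $\mathrm{TV}$ is the target volume. A lower HI indicates a more homogeneous target dose, and a CI near 1 indicates a tightly conformal plan.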
Abstract:
This paper presents a method for the continuous segmentation of dynamic objects using only a vehicle-mounted monocular camera, without any prior knowledge of the objects' appearance. Prior work in online static/dynamic segmentation is extended to identify multiple instances of dynamic objects by introducing an unsupervised motion clustering step. These clusters are then used to update a multi-class classifier within a self-supervised framework. In contrast to many tracking-by-detection methods, our system is able to detect dynamic objects without any prior knowledge of their visual appearance, shape or location. Furthermore, the classifier is used to propagate labels of the same object across previous frames, which facilitates the continuous tracking of individual objects based on motion. The proposed system is evaluated using recall and false alarm metrics on a new multi-instance labelled dataset designed to assess the segmentation of multiple object instances.
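For reference, the two evaluation metrics in their standard forms (stated generically; whether counts are per pixel or per object instance depends on the evaluation protocol):

\[
\text{recall} = \frac{TP}{TP + FN}, \qquad \text{false alarm rate} = \frac{FP}{FP + TN},
\]

where $TP$, $FP$, $TN$ and $FN$ count true/false positives and negatives for the dynamic-object class.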