9 results for Metrics of management
in WestminsterResearch - UK
Abstract:
The advantages of a DSL and the benefits its use potentially brings imply that informed decisions on the design of a domain-specific language are of paramount importance for its use. We believe that such decisions should be founded on the analysis of data empirically collected from systems, highlighting salient features that should then form the basis of a DSL. To support this theory, we describe an empirical study of a large open-source system (OSS) called Barcode, written in C, from which we collected two well-known slice-based metrics. We analyzed multiple versions of the system and sliced its functions in three separate ways (i.e., on input, output and global variables). The purpose of the study was to try to identify sensitivities and traits in those metrics that might inform features of a potential slice-based DSL. Results indicated that cohesion was adversely affected by the use of global variables and that an appreciation of the role of function inputs and outputs can be gained through slicing. The study presented is motivated primarily by the problems with current tools and interfaces experienced directly by the authors in extracting slicing data, and by the need to promote the benefits that analysis of slice data, and slicing in general, can offer.
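As an illustration of the kind of measurement involved, here is a minimal Python sketch (not the authors' tooling; the abstract does not name the two metrics, so the commonly studied Weiser/Ott-style Tightness and Overlap are assumed here, with slices represented as sets of line numbers):

    def tightness(slices, module_length):
        # |intersection of all slices| / number of lines in the function.
        common = set.intersection(*slices)
        return len(common) / module_length

    def overlap(slices):
        # Mean ratio of the common intersection to each slice's own size.
        common = set.intersection(*slices)
        return sum(len(common) / len(s) for s in slices) / len(slices)

    # Example: three slices of a 10-line function, e.g. one per output variable.
    slices = [{1, 2, 3, 4, 7}, {1, 2, 3, 5, 8}, {1, 2, 3, 6, 9, 10}]
    print(tightness(slices, 10))  # 0.3
    print(overlap(slices))        # (3/5 + 3/5 + 3/6) / 3 ~= 0.567

A function that routes everything through shared global state tends to enlarge every slice without enlarging their intersection, which is one way cohesion metrics like these register the adverse effect of global variables.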
Abstract:
Estimates of airline delay costs as a function of delay magnitude are combined with fuel and (future) emissions charges to make cost-benefit trade-offs in the pre-departure and airborne phases. Hypothetical scenarios for the distribution of flow management slots are explored in terms of their cost and target-setting implications. The general superiority of passenger-centric metrics is significant for delay measurement, although flight delays are still the only commonly reported type of metric in both the US and Europe. There is a particular need for further research into reactionary (network) effects, especially with regard to passenger metrics and flow management delay.
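The shape of such an airborne trade-off can be sketched as follows (purely illustrative: the quadratic delay-cost curve, fuel figures and charge rates below are assumptions, not the paper's calibrated values):

    FUEL_PRICE = 0.8        # EUR per kg of fuel, assumed
    EMISSIONS_CHARGE = 0.1  # EUR per kg of fuel, assumed future charge

    def delay_cost(minutes):
        # Delay costs grow non-linearly with magnitude; a simple
        # quadratic stands in for the tabulated cost curves here.
        return 2.0 * minutes ** 2

    def recovery_cost(minutes_recovered, extra_fuel_per_min=30.0):
        # Flying faster to recover delay burns extra fuel (kg/min,
        # assumed) and incurs the corresponding emissions charge.
        extra_fuel = extra_fuel_per_min * minutes_recovered
        return extra_fuel * (FUEL_PRICE + EMISSIONS_CHARGE)

    def worth_recovering(delay, recover):
        saved = delay_cost(delay) - delay_cost(delay - recover)
        return saved > recovery_cost(recover)

    print(worth_recovering(30, 10))  # True: large delays justify the burn
    print(worth_recovering(5, 5))    # False: small delays do not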
Abstract:
Reactionary delays constitute nearly half of all delay minutes in Europe. A capped, multi-component model is presented for estimating reactionary delay costs, as a non-linear function of primary delay duration. Maximum Take-Off Weights, historically established as a charging mechanism, may be used to model delay costs. Current industry reporting on delay is flight-centric. Passenger-centric metrics are needed to better understand delay propagation. In ATM, it is important to take account of contrasting flight- and passenger-centric effects, caused by cancellations, for example. Costs to airlines and passenger disutility will both continue to be driven by delay relative to the original schedule.
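A hypothetical sketch of the general shape such a model takes (the component shares, exponent and cap below are placeholders, not the paper's calibrated values):

    def reactionary_cost(primary_delay, rate=30.0, exponent=1.5, cap=50_000.0):
        # Cost (EUR) of the knock-on delay triggered by a primary delay
        # (minutes): several cost components, non-linear growth, capped.
        components = {
            "crew": 0.4,         # assumed share of the overall rate
            "passengers": 0.5,
            "maintenance": 0.1,
        }
        raw = sum(share * rate * primary_delay ** exponent
                  for share in components.values())
        return min(raw, cap)

    for d in (15, 60, 180):
        print(d, round(reactionary_cost(d)))  # the 180-min case hits the cap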
Abstract:
What is the best luminance contrast weighting-function for image quality optimization? Traditionally measured contrast sensitivity functions (CSFs) have often been used as weighting-functions in image quality and difference metrics. Such weightings have been shown to result in increased sharpness and perceived quality of test images. We suggest that contextual CSFs (cCSFs) and contextual discrimination functions (cVPFs) should provide a basis for further improvement, since these are directly measured from pictorial scenes, modeling threshold and suprathreshold sensitivities within the context of complex masking information. Image quality assessment is understood to require detection and discrimination of masked signals, making contextual sensitivity and discrimination functions directly relevant. In this investigation, test images are weighted with a traditional CSF, a cCSF, a cVPF and a constant function. Controlled mutations of these functions are also applied as weighting-functions, seeking the optimal spatial frequency band weighting for quality optimization. Image quality, sharpness and naturalness are then assessed in two-alternative forced-choice psychophysical tests. We show that maximal quality for our test images results from cCSFs and cVPFs mutated to boost contrast in the higher visible frequencies.
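A minimal Python sketch of the band-weighting step described above, assuming a simple radial partition of the frequency spectrum; the weights here are made up and merely stand in for a CSF, cCSF, cVPF or one of their mutations:

    import numpy as np

    def weight_bands(image, band_weights):
        # Scale radial spatial-frequency bands of `image` by `band_weights`.
        F = np.fft.fftshift(np.fft.fft2(image))
        h, w = image.shape
        yy, xx = np.indices((h, w))
        # Radial frequency of each FFT coefficient, normalised to [0, 1].
        r = np.hypot(yy - h / 2, xx - w / 2)
        r = r / r.max()
        n = len(band_weights)
        gain = np.ones_like(r)
        for i, wgt in enumerate(band_weights):
            band = (r >= i / n) & (r < (i + 1) / n)
            gain[band] = wgt
        return np.real(np.fft.ifft2(np.fft.ifftshift(F * gain)))

    img = np.random.rand(128, 128)
    # Boost the higher visible frequencies, as the best-performing mutated
    # functions in the study did (the weight values are invented).
    out = weight_bands(img, [1.0, 1.1, 1.3, 1.5])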
Abstract:
Data registration refers to a series of techniques for matching or bringing similar objects or datasets together into alignment. These techniques enjoy widespread use in a diverse variety of applications, such as video coding, tracking, object and face detection and recognition, surveillance and satellite imaging, medical image analysis and structure from motion. Registration methods are as numerous as their manifold uses, ranging from pixel-level and block- or feature-based methods to Fourier-domain methods. This book focuses on providing algorithms and image and video techniques for registration, together with quality performance metrics. The authors provide various assessment metrics for measuring registration quality alongside analyses of registration techniques, introducing and explaining both familiar and state-of-the-art registration methodologies used in a variety of targeted applications.
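Since Fourier-domain methods are mentioned, here is a minimal sketch of one classic member of that family, phase correlation (an illustration, not an excerpt from the book): the translation between two images is recovered from the phase of their cross-power spectrum.

    import numpy as np

    def phase_correlation(moved, ref):
        # Estimate the shift d such that moved(x) = ref(x - d).
        F1, F2 = np.fft.fft2(moved), np.fft.fft2(ref)
        cross = F1 * np.conj(F2)
        cross /= np.abs(cross) + 1e-12   # keep phase only
        corr = np.real(np.fft.ifft2(cross))
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        # Peaks past the midpoint wrap around to negative shifts.
        return tuple(int(p) - s if p > s // 2 else int(p)
                     for p, s in zip(peak, corr.shape))

    ref = np.random.rand(64, 64)
    moved = np.roll(ref, shift=(5, -3), axis=(0, 1))
    print(phase_correlation(moved, ref))  # -> (5, -3)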
Abstract:
Air traffic management research lacks a framework for modelling the cost of resilience during disturbance. There is no universally accepted metric for cost resilience. The design of such a framework is presented and the modelling to date is reported. The framework allows performance assessment as a function of differential stakeholder uptake of strategic mechanisms designed to mitigate disturbance. Advanced metrics, cost- and non-cost-based, disaggregated by stakeholder sub-types, are described. A new cost resilience metric is proposed and exemplified with early test data.
Abstract:
Complex network theory is a framework increasingly used in the study of air transport networks, thanks to its ability to describe the structures created by networks of flights and their influence on dynamical processes such as delay propagation. While many works consider only a fraction of the network, created by major airports or airlines, for example, it is not clear if and how such a sampling process biases the observed structures and processes. In this contribution, we tackle this problem by studying how some observed topological metrics depend on the way the network is reconstructed, i.e. on the rules used to sample nodes and connections. Both structural and simple dynamical properties are considered, for eight major air networks and different source datasets. Results indicate that using a subset of airports strongly distorts our perception of the network, even when only small ones are discarded; at the same time, considering a subset of airlines yields a better and more stable representation. This allows us to provide some general guidelines on the way airports and connections should be sampled.
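The kind of sampling experiment involved can be sketched as follows (illustrative only, run on a synthetic graph rather than the paper's eight networks; node degree stands in for airport size):

    import networkx as nx

    def top_airport_subnetwork(G, k):
        # Keep only the k highest-degree nodes (a proxy for major airports).
        top = sorted(G.degree, key=lambda kv: kv[1], reverse=True)[:k]
        return G.subgraph(n for n, _ in top).copy()

    def summary(G):
        return {
            "nodes": G.number_of_nodes(),
            "density": nx.density(G),
            "clustering": nx.average_clustering(G),
        }

    full = nx.barabasi_albert_graph(500, 3, seed=1)  # stand-in for a flight network
    for k in (500, 250, 100):
        print(k, summary(top_airport_subnetwork(full, k)))
    # Density and clustering shift as smaller airports are discarded,
    # illustrating the sampling bias the study quantifies.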
Abstract:
As identified by Griffin (1997) and Kahn (2012), manufacturing organisations typically improve their market position by accelerating their product development (PD) cycles. One method for achieving this is to reduce the time taken to design, test and validate new products, so that they can reach the end customer before the competition. This paper adds to existing research on PD testing procedures by reporting on an exploratory investigation carried out in a UK-based manufacturing plant. We explore the organisational and managerial factors that contribute to the time spent on testing new products during development. The investigation consisted of three sections, viz. observations and process modelling, utilisation metrics and a questionnaire-based investigation, from which a framework to improve testing and reduce the PD cycle time is proposed. This research focuses specifically on improving the utilisation of product testing facilities and on the links to their main internal stakeholders, the PD engineers.
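The abstract does not define its utilisation metrics; as a hypothetical example of the simplest form such a metric takes:

    def utilisation(test_hours, available_hours):
        # Fraction of available facility time spent actually running tests.
        return test_hours / available_hours

    # Example: a test rig available 2 shifts x 8 h x 5 days = 80 h per week,
    # of which 52 h ran tests (all figures made up).
    print(f"{utilisation(52, 80):.0%}")  # 65%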
Abstract:
Introduction: Quantitative and accurate measurements of fat and muscle in the body are important for the prevention and diagnosis of diseases related to obesity and muscle degeneration. Manually segmenting muscle and fat compartments in MR body images is laborious and time-consuming, hindering implementation in large cohorts. In the present study, the feasibility and success-rate of a Dixon-based MR scan followed by an intensity-normalised, non-rigid, multi-atlas based segmentation were investigated in a cohort of 3,000 subjects. Materials and Methods: 3,000 participants in the in-depth phenotyping arm of the UK Biobank imaging study underwent a comprehensive MR examination. All subjects were scanned using a 1.5 T MR scanner with the dual-echo Dixon Vibe protocol, covering neck to knees. Subjects were scanned with six slabs in the supine position, without a localizer. Automated body composition analysis was performed using the AMRA Profiler™ system to segment and quantify visceral adipose tissue (VAT), abdominal subcutaneous adipose tissue (ASAT) and thigh muscles. Technical quality assurance was performed and a standard set of acceptance/rejection criteria was established. Descriptive statistics were calculated for all volume measurements and quality assurance metrics. Results: Of the 3,000 subjects, 2,995 (99.83%) were analysable for body fat, 2,828 (94.27%) were analysable when body fat and one thigh muscle were included, and 2,775 (92.50%) were fully analysable for body fat and both thigh muscles. Datasets could not be analysed mainly because of missing slabs in the acquisition, or because the patient was positioned so that large parts of the volume were outside the field-of-view. Discussion and Conclusions: This study showed that the rapid UK Biobank MR protocol was well tolerated by most subjects and sufficiently robust to achieve a very high success-rate for body composition analysis. This research has been conducted using the UK Biobank Resource.
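The reported success-rates follow directly from the counts; a quick arithmetic check:

    cohort = 3000
    for label, n in [("body fat", 2995),
                     ("body fat + one thigh", 2828),
                     ("body fat + both thighs", 2775)]:
        print(f"{label}: {n}/{cohort} = {n / cohort:.2%}")
    # body fat: 2995/3000 = 99.83%
    # body fat + one thigh: 2828/3000 = 94.27%
    # body fat + both thighs: 2775/3000 = 92.50%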