48 results for Evaluation of organizational performance
Abstract:
Determination of varicella zoster virus (VZV) immunity in healthcare workers without a history of chickenpox is important for identifying those in need of vOka vaccination. Post-immunisation, healthcare workers in the UK who work with high-risk patients are tested for seroconversion. To assess the performance of the time-resolved fluorescence immunoassay (TRFIA) for the detection of antibody in vaccinated as well as unvaccinated individuals, a cut-off was first calculated. VZV-IgG-specific avidity and titres six weeks after the first dose of vaccine were used to identify subjects with pre-existing immunity among a cohort of 110 healthcare workers. Those with high avidity (≥60%) were considered to have previous immunity to VZV and those with low or equivocal avidity (<60%) were considered naive. The former had antibody levels ≥400 mIU/mL and the latter had levels <400 mIU/mL. Comparison of the baseline values of the naive and immune groups allowed the estimation of a TRFIA cut-off value of >130 mIU/mL which best discriminated between the two groups; this was confirmed by ROC analysis. Using this value, the sensitivity and specificity of the TRFIA cut-off were 90% (95% CI 79-96) and 78% (95% CI 61-90), respectively, in this population. A subset of samples tested by the gold-standard Fluorescence Antibody to Membrane Antigen (FAMA) test showed 84% (54/64) agreement with TRFIA.
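A minimal sketch of how such a cut-off can be derived by ROC analysis, using Youden's J statistic to pick the threshold that best separates the groups. The titre distributions and variable names below are hypothetical stand-ins for the study's data, not its measurements:

```python
# Deriving an assay cut-off by ROC analysis (illustrative data only).
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
titres = np.concatenate([rng.lognormal(6.5, 0.6, 70),   # "immune" group (high avidity)
                         rng.lognormal(4.0, 0.7, 40)])  # "naive" group (low avidity)
is_immune = np.array([1] * 70 + [0] * 40)

fpr, tpr, thresholds = roc_curve(is_immune, titres)

# Youden's J = sensitivity + specificity - 1; its maximum gives the
# threshold that best discriminates between the two groups.
j = tpr - fpr
best = np.argmax(j)
print(f"cut-off ≈ {thresholds[best]:.0f} mIU/mL, "
      f"sensitivity {tpr[best]:.0%}, specificity {1 - fpr[best]:.0%}")
```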
Abstract:
Reconfigurable computing is becoming an important new alternative for implementing computations. Field-programmable gate arrays (FPGAs) are the ideal integrated-circuit technology for experimenting with the potential benefits of different strategies of circuit specialization by reconfiguration. The final form of the reconfiguration strategy is often non-trivial to determine. Consequently, in this paper we examine strategies for reconfiguration and, based on our experience, propose general guidelines for the trade-offs using an area-time metric called functional density. Three experiments are set up to explore different reconfiguration strategies for FPGAs, applied to a systolic implementation of a scalar quantizer used as a case study. Quantitative results for each experiment are given. The regular nature of the example means that the results can be generalized to a wide class of industry-relevant problems based on arrays.
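Functional density is commonly formulated as D = 1/(A x T), i.e. throughput per unit of silicon area, with the configuration time of a run-time-reconfigured design amortized over its useful computation. The sketch below illustrates that trade-off under this assumed formulation; all numbers are placeholders, not figures from the paper:

```python
# Area-time trade-off via functional density (illustrative values only).

def functional_density(area, op_time, config_time=0.0, ops_per_config=1):
    """D = 1 / (A * T), charging reconfiguration overhead per operation."""
    effective_time = op_time + config_time / ops_per_config
    return 1.0 / (area * effective_time)

# Static, general-purpose circuit: no reconfiguration cost.
static = functional_density(area=100.0, op_time=1.0)

# Specialized circuit: smaller and faster per operation, but it must
# amortize a large reconfiguration cost over many operations.
specialized = functional_density(area=60.0, op_time=0.7,
                                 config_time=5000.0, ops_per_config=100000)

print(f"static D = {static:.5f}, specialized D = {specialized:.5f}")
```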
Abstract:
Integrated Arable Farming Systems (IAFS), which involve a reduction in the use of off-farm inputs, are attracting considerable research interest in the UK. The objective of these systems experiments is to compare their financial performance with that of conventional or current farming practices. To date, this comparison has taken little account of any environmental benefits (or disbenefits) of the two systems. The objective of this paper is to review the assessment methodologies available for the analysis of environmental impacts. To illustrate the results of this exercise, the methodology and environmental indicators chosen are then applied to data from one of the LINK - Integrated Farming Systems experimental sites. Data from the Pathhead site in southern Scotland are used to evaluate the use of invertebrates and nitrate loss as environmental indicators within IAFS. The results suggest that between 1992 and 1995 the biomass of earthworms fell by 28 kg per hectare on the integrated rotation and rose by 31 kg per hectare on the conventional system. This led to environmental costs of between £2.24 and £13.44 per hectare for the integrated system and gains of between £2.48 and £14.88 per hectare for the conventional system. In terms of nitrate, the integrated system had an estimated loss of £72.21 per hectare, compared with £149.40 per hectare on the conventional system. Conclusions are drawn about the advantages and disadvantages of this type of analytical framework.
Keywords: Farming systems; IAFS; Environmental valuation; Economics; Earthworms; Nitrates; Soil fauna
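The per-hectare figures quoted above are mutually consistent with unit values of roughly £0.08 to £0.48 per kg of earthworm biomass; the back-of-envelope sketch below reproduces them. Note that these unit values are inferred from the quoted ranges, not stated in the abstract:

```python
# Reproducing the earthworm valuation arithmetic quoted above.
unit_values = (0.08, 0.48)   # GBP per kg of earthworm biomass (inferred)

for label, change in [("integrated", -28.0),     # kg/ha, 1992-1995
                      ("conventional", +31.0)]:
    v_low, v_high = (change * v for v in unit_values)
    print(f"{label}: {v_low:+.2f} to {v_high:+.2f} GBP/ha")

# integrated:   -2.24 to -13.44 GBP/ha  (costs)
# conventional: +2.48 to +14.88 GBP/ha  (gains)
```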
Abstract:
The prebiotic effect of oligosaccharides recovered and purified from caprine whey was evaluated by in vitro fermentation under anaerobic conditions, using batch cultures at 37 °C inoculated with human faeces. Effects on key gut bacterial groups were monitored over 24 h by fluorescence in situ hybridisation (FISH), which was used to determine a quantitative prebiotic index score. Production of short-chain fatty acids (SCFAs) as fermentation end products was analysed by high-performance liquid chromatography (HPLC). Growth of Bifidobacterium spp. was significantly higher (p ≤ 0.05) with the purified oligosaccharides than with the negative control. Lactic and propionic acids were the main SCFAs produced. The antimicrobial activity of the oligosaccharides was also tested: no complete inhibition was observed, although growth of Staphylococcus aureus and Escherichia coli was reduced. These findings indicate that naturally extracted oligosaccharides from caprine whey could be used as a new and valuable source of prebiotics.
Abstract:
Several methods are examined that produce forecasts for time series in the form of probability assignments. The necessary concepts are presented, addressing questions such as how to assess the performance of a probabilistic forecast. One class of models, cluster-weighted models (CWMs), is given particular attention. CWMs, originally proposed for deterministic forecasts, can be employed for probabilistic forecasting with little modification. Two examples are presented. The first involves estimating the state of (numerically simulated) dynamical systems from noise-corrupted measurements, a problem also known as filtering. This problem has an optimal solution, called the optimal filter, against which the considered time series models are compared. (The optimal filter requires the dynamical equations to be known.) In the second example, we aim at forecasting the chaotic oscillations of an experimental bronze spring system. Both examples demonstrate that the considered time series models, and especially the CWMs, provide useful probabilistic information about the underlying dynamical relations. In particular, they provide more than just an approximation to the conditional mean.
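A minimal sketch of the cluster-weighted predictive density that underlies such probabilistic forecasts: p(y|x) = sum_k w_k(x) N(y; f_k(x), s_k^2), with input-dependent weights w_k(x) proportional to p(k) N(x; mu_k, sigma_k^2). The cluster parameters below are illustrative placeholders, not values fitted in the paper:

```python
# Cluster-weighted predictive density (illustrative parameters only).
import numpy as np
from scipy.stats import norm

# Three hypothetical clusters: prior weight p, input centre/width (mu, sigma),
# local linear model y ~ a*x + b, and output noise s.
clusters = [
    dict(p=0.5, mu=0.0, sigma=1.0, a=1.2, b=0.0, s=0.3),
    dict(p=0.3, mu=2.0, sigma=0.8, a=-0.5, b=2.5, s=0.4),
    dict(p=0.2, mu=-2.0, sigma=1.2, a=0.1, b=-1.0, s=0.5),
]

def predictive_density(y, x):
    """p(y | x) as a cluster-weighted mixture of Gaussians."""
    w = np.array([c["p"] * norm.pdf(x, c["mu"], c["sigma"]) for c in clusters])
    w /= w.sum()  # normalized, input-dependent cluster weights
    comps = [norm.pdf(y, c["a"] * x + c["b"], c["s"]) for c in clusters]
    return float(np.dot(w, comps))

# Full probability assignment for y given x, not just a conditional mean.
print(predictive_density(y=1.0, x=0.5))
```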
Abstract:
There is large uncertainty about the magnitude of warming and how rainfall patterns will change in response to any given scenario of future changes in atmospheric composition and land use. The models used for future climate projections were developed and calibrated using climate observations from the past 40 years. The geologic record of environmental responses to climate changes provides a unique opportunity to test model performance outside this limited climate range. Evaluation of model simulations against palaeodata shows that models reproduce the direction and large-scale patterns of past changes in climate, but tend to underestimate the magnitude of regional changes. As part of the effort to reduce model-related uncertainty and produce more reliable estimates of twenty-first century climate, the Palaeoclimate Modelling Intercomparison Project is systematically applying palaeoevaluation techniques to simulations of the past run with the models used to make future projections. This evaluation will provide assessments of model performance, including whether a model is sufficiently sensitive to changes in atmospheric composition, as well as providing estimates of the strength of biosphere and other feedbacks that could amplify the model response to these changes and modify the characteristics of climate variability.
Abstract:
An extensive off-line evaluation of the Noah/Single Layer Urban Canopy Model (Noah/SLUCM) urban land-surface model is presented using data from 15 sites to assess (1) the ability of the scheme to reproduce the surface energy balance observed in a range of urban environments, including seasonal changes, and (2) the impact of increasing complexity of input parameter information. Model performance is found to be most dependent on representation of vegetated surface area cover; refinement of other parameter values leads to smaller improvements. Model biases in net all-wave radiation and trade-offs between turbulent heat fluxes are highlighted using an optimization algorithm. Here we use the Urban Zones to characterize Energy partitioning (UZE) as the basis to assign default SLUCM parameter values. A methodology (FRAISE) to assign sites (or areas) to one of these categories based on surface characteristics is evaluated. Using three urban sites from the Basel Urban Boundary Layer Experiment (BUBBLE) dataset, an independent evaluation of the model performance with the parameter values representative of each class is performed. The scheme copes well with both seasonal changes in the surface characteristics and intra-urban heterogeneities in energy flux partitioning, with RMSE performance comparable to similar state-of-the-art models for all fluxes, sites and seasons. The potential of the methodology for high-resolution atmospheric modelling application using the Weather Research and Forecasting (WRF) model is highlighted. This analysis supports the recommendations that (1) three classes are appropriate to characterize the urban environment, and (2) that the parameter values identified should be adopted as default values in WRF.
Abstract:
We describe here the development and evaluation of an Earth system model suitable for centennial-scale climate prediction. The principal new components added to the physical climate model are the terrestrial and ocean ecosystems and gas-phase tropospheric chemistry, along with their coupled interactions. The individual Earth system components are described briefly and the relevant interactions between the components are explained. Because the multiple interactions could lead to unstable feedbacks, we go through a careful process of model spin-up to ensure that all components are stable and the interactions balanced. This spun-up configuration is evaluated against observed data for the Earth system components and is generally found to perform very satisfactorily. The evaluation phase matters because the model is to be used for the core climate simulations carried out by the Met Office Hadley Centre for the Coupled Model Intercomparison Project (CMIP5), so it is essential that the added complexity does not detract substantially from its climate performance. Localised changes in some specific meteorological variables can be identified, but the impacts on the overall simulation of present-day climate are slight. This model is proving valuable both for climate predictions and for investigating the strengths of biogeochemical feedbacks.
Abstract:
Providing probabilistic forecasts using ensemble prediction systems has become increasingly popular in both the meteorological and hydrological communities. Compared with conventional deterministic forecasts, probabilistic forecasts may be more reliable at lead times from a few hours to a number of days ahead, and hence are regarded as better tools for taking uncertainties into consideration and hedging against weather risks. It is essential to evaluate the performance of raw ensemble forecasts and their potential value in forecasting extreme hydro-meteorological events. This study evaluates ECMWF's medium-range ensemble forecasts of precipitation over the period 2008/01/01-2012/09/30 for a selected mid-latitude large-scale river basin, the Huai river basin (ca. 270,000 km²) in central-east China. The evaluation unit is the sub-basin, in order to consider forecast performance in a hydrologically relevant way. The study finds that forecast performance varies with sub-basin properties, between flooding and non-flooding seasons, and with the aggregation time step and lead time of the forecasts. Although the study does not evaluate any hydrological applications of the ensemble precipitation forecasts, its results have direct implications for hydrological forecasting should these ensemble precipitation forecasts be employed in hydrology.
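One standard score for verifying raw ensemble forecasts is the continuous ranked probability score (CRPS), estimated from ensemble members as CRPS = E|X - y| - 0.5 E|X - X'|. The abstract does not state which measures were used, so the sketch below is illustrative, with synthetic data:

```python
# Sample-based CRPS for an ensemble precipitation forecast (synthetic data).
import numpy as np

def crps_ensemble(members, obs):
    """CRPS for one forecast-observation pair (lower is better)."""
    members = np.asarray(members, dtype=float)
    term1 = np.abs(members - obs).mean()
    # Mean absolute difference between all pairs of ensemble members.
    term2 = 0.5 * np.abs(members[:, None] - members[None, :]).mean()
    return term1 - term2

rng = np.random.default_rng(42)
ensemble = rng.gamma(shape=2.0, scale=3.0, size=51)  # 51 members, mm/day
observation = 4.2                                    # sub-basin mean, mm/day
print(f"CRPS = {crps_ensemble(ensemble, observation):.3f} mm/day")
```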
Abstract:
Many of the next generation of global climate models will include aerosol schemes which explicitly simulate the microphysical processes that determine the particle size distribution. These models enable aerosol optical properties and cloud condensation nuclei (CCN) concentrations to be determined by fundamental aerosol processes, which should lead to a more physically based simulation of aerosol direct and indirect radiative forcings. This study examines the global variation in particle size distribution simulated by 12 global aerosol microphysics models to quantify model diversity and to identify any common biases against observations. Evaluation against size distribution measurements from a new European network of aerosol supersites shows that the mean model agrees quite well with the observations at many sites on the annual mean, but there are some seasonal biases common to many sites. In particular, at many of these European sites, the accumulation mode number concentration is biased low during winter and Aitken mode concentrations tend to be overestimated in winter and underestimated in summer. At high northern latitudes, the models strongly underpredict Aitken and accumulation particle concentrations compared to the measurements, consistent with previous studies that have highlighted the poor performance of global aerosol models in the Arctic. In the marine boundary layer, the models capture the observed meridional variation in the size distribution, which is dominated by the Aitken mode at high latitudes, with an increasing concentration of accumulation particles with decreasing latitude. Considering vertical profiles, the models reproduce the observed peak in total particle concentrations in the upper troposphere due to new particle formation, although modelled peak concentrations tend to be biased high over Europe. Overall, the multi-model-mean data set simulates the global variation of the particle size distribution with a good degree of skill, suggesting that most of the individual global aerosol microphysics models are performing well, although the large model diversity indicates that some models are in poor agreement with the observations. Further work is required to better constrain size-resolved primary and secondary particle number sources, and an improved understanding of nucleation and growth (e.g. the role of nitrate and secondary organics) will improve the fidelity of simulated particle size distributions.
Abstract:
Algorithms for computer-aided diagnosis of dementia based on structural MRI have demonstrated high performance in the literature, but are difficult to compare as different data sets and methodology were used for evaluation. In addition, it is unclear how the algorithms would perform on previously unseen data, and thus, how they would perform in clinical practice when there is no real opportunity to adapt the algorithm to the data at hand. To address these comparability, generalizability and clinical applicability issues, we organized a grand challenge that aimed to objectively compare algorithms based on a clinically representative multi-center data set. Using clinical practice as the starting point, the goal was to reproduce the clinical diagnosis. Therefore, we evaluated algorithms for multi-class classification of three diagnostic groups: patients with probable Alzheimer's disease, patients with mild cognitive impairment and healthy controls. The diagnosis based on clinical criteria was used as reference standard, as it was the best available reference despite its known limitations. For evaluation, a previously unseen test set was used consisting of 354 T1-weighted MRI scans with the diagnoses blinded. Fifteen research teams participated with a total of 29 algorithms. The algorithms were trained on a small training set (n = 30) and optionally on data from other sources (e.g., the Alzheimer's Disease Neuroimaging Initiative, the Australian Imaging Biomarkers and Lifestyle flagship study of aging). The best performing algorithm yielded an accuracy of 63.0% and an area under the receiver-operating-characteristic curve (AUC) of 78.8%. In general, the best performances were achieved using feature extraction based on voxel-based morphometry or a combination of features that included volume, cortical thickness, shape and intensity. The challenge is open for new submissions via the web-based framework: http://caddementia.grand-challenge.org.
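A hedged sketch of the kind of multi-class evaluation described above (accuracy plus a one-vs-rest multi-class AUC). The labels and predicted probabilities are synthetic stand-ins for challenge submissions, and the challenge's exact AUC definition may differ:

```python
# Multi-class accuracy and AUC for a three-class diagnosis task
# (synthetic predictions; 354 matches the test-set size quoted above).
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(1)
y_true = rng.integers(0, 3, size=354)              # 0=control, 1=MCI, 2=AD
proba = rng.dirichlet(alpha=[1, 1, 1], size=354)   # per-class probabilities
y_pred = proba.argmax(axis=1)

acc = accuracy_score(y_true, y_pred)
# One-vs-rest AUC averaged over classes, a common multi-class extension.
auc = roc_auc_score(y_true, proba, multi_class="ovr", average="macro")
print(f"accuracy = {acc:.1%}, AUC (OvR macro) = {auc:.1%}")
```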
Abstract:
Ground-based remote-sensing observations from Atmospheric Radiation Measurement (ARM) and Cloud-Net sites are used to evaluate the clouds predicted by a weather forecasting and climate model. By evaluating the cloud predictions using separate measures for the errors in frequency of occurrence, amount when present, and timing, we provide a detailed assessment of the model performance, which is relevant to weather and climate time-scales. Importantly, this methodology will be of great use when attempting to develop a cloud parametrization scheme, as it provides a clearer picture of the current deficiencies in the predicted clouds. Using the Met Office Unified Model, it is shown that when cloud fractions produced by a diagnostic and a prognostic cloud scheme are compared, the prognostic cloud scheme shows improvements to the biases in frequency of occurrence of low, medium and high cloud and to the frequency distributions of cloud amount when cloud is present. The mean cloud profiles are generally improved, although it is shown that in some cases the diagnostic scheme produced misleadingly good mean profiles as a result of compensating errors in frequency of occurrence and amount when present. Some biases remain when using the prognostic scheme, notably the underprediction of mean ice cloud fraction due to the amount when present being too low, and the overprediction of mean liquid cloud fraction due to the frequency of occurrence being too high.
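The decomposition used above can be made concrete: the time-mean cloud fraction factors exactly into frequency of occurrence times mean amount when present, which is what allows compensating errors in the two to be separated. A minimal sketch with a synthetic cloud-fraction series:

```python
# Decomposing mean cloud fraction into frequency of occurrence and
# amount when present (synthetic cloud-fraction time series).
import numpy as np

rng = np.random.default_rng(7)
# Cloud present ~40% of the time; when present, fraction ~ Beta(2, 5).
cf = np.where(rng.random(1000) < 0.4, rng.beta(2, 5, 1000), 0.0)

present = cf > 0.0
freq_occurrence = present.mean()            # how often cloud occurs
amount_when_present = cf[present].mean()    # how much, when it occurs

print(f"mean cloud fraction = {cf.mean():.3f}")
print(f"freq * amount       = {freq_occurrence * amount_when_present:.3f}")
# The two numbers agree exactly, so errors in each factor can be
# diagnosed separately even when they compensate in the mean profile.
```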
Abstract:
We report a straightforward methodology for the fabrication of high-temperature thermoelectric (TE) modules using commercially available solder alloys and metal barriers. This methodology employs standard and accessible facilities that are simple to implement in any laboratory. A TE module formed by nine couples of n-type YbxCo4Sb12 and p-type CexFe3CoSb12 state-of-the-art skutterudite materials was fabricated. The physical properties of the synthesized skutterudites were determined, and the module power output, internal resistance, and thermocycling stability were evaluated in air. At a temperature difference of 365 K, the module provides a volume power density of more than 1.5 W cm⁻³. However, thermocycling showed an increase in the internal module resistance and degradation in performance with the number of cycles when the device is operated at a hot-side temperature above 573 K. This may be attributed to oxidation of the skutterudite thermoelements.
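The link between internal resistance and power output follows standard circuit theory: with open-circuit voltage V_oc and internal resistance R_int, the power delivered to a load R_L is P = V_oc^2 R_L / (R_int + R_L)^2, maximized at the matched load R_L = R_int, where it equals V_oc^2 / (4 R_int). A sketch with placeholder values (not measurements from the paper):

```python
# Matched-load power of a TE module (illustrative numbers only).

def load_power(v_oc, r_int, r_load):
    """Power delivered to a resistive load by a source with internal resistance."""
    return v_oc**2 * r_load / (r_int + r_load) ** 2

v_oc, r_int = 1.8, 0.25          # volts, ohms (placeholders)
p_max = v_oc**2 / (4 * r_int)    # maximum, at the matched load r_load = r_int

print(f"P at matched load = {load_power(v_oc, r_int, r_int):.2f} W "
      f"(= {p_max:.2f} W)")
# A thermocycling-induced rise in r_int directly lowers this maximum,
# consistent with the degradation reported above.
```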
Abstract:
The objective of this article is to study the problem of pedestrian classification across different light spectrum domains (visible and far-infrared (FIR)) and modalities (intensity, depth and motion). In recent years, there have been a number of approaches for classifying and detecting pedestrians in both FIR and visible images, but the methods are difficult to compare, because either the datasets are not publicly available or they do not offer a comparison between the two domains. Our two primary contributions are the following: (1) we propose a public dataset, named RIFIR, containing both FIR and visible images collected in an urban environment from a moving vehicle during daytime; and (2) we compare the state-of-the-art features in a multi-modality setup (intensity, depth and flow) in the far-infrared versus visible domains. The experiments show that the feature families, intensity self-similarity (ISS), local binary patterns (LBP), local gradient patterns (LGP) and histogram of oriented gradients (HOG), computed from the FIR and visible domains are highly complementary, but their relative performance varies across the different modalities. In our experiments, the FIR domain has proven superior to the visible one for the task of pedestrian classification, but the overall best results are obtained by a multi-domain, multi-modality, multi-feature fusion.
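A brief sketch of two of the feature families compared above (HOG and LBP), computed with scikit-image on a placeholder image patch; the parameter choices are typical defaults, not necessarily those used in the paper:

```python
# HOG and LBP features for a pedestrian image crop (placeholder input).
import numpy as np
from skimage.feature import hog, local_binary_pattern

rng = np.random.default_rng(3)
patch = rng.random((128, 64))  # stand-in for a pedestrian window (H x W)

# Histogram of oriented gradients (HOG).
hog_vec = hog(patch, orientations=9, pixels_per_cell=(8, 8),
              cells_per_block=(2, 2))

# Local binary patterns (LBP), summarized as a normalized histogram.
lbp = local_binary_pattern(patch, P=8, R=1, method="uniform")
lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)

# The same descriptors can be computed on FIR and visible crops and the
# resulting vectors fed to a classifier, separately or fused.
print(hog_vec.shape, lbp_hist.shape)
```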
Abstract:
Different treatments that could be implemented in the home environment are evaluated with the objective of reaching a more rational and efficient use of energy. We consider that a detailed knowledge of energy-consuming behaviour is paramount for the development and implementation of new technologies, services and even policies that could result in more rational energy use. The proposed evaluation methodology is based on the development of economic experiments implemented in an experimental economics laboratory, where the behaviour of individuals when making decisions related to energy use in the domestic environment can be tested.