34 results for Least-cost-variance methodology
in CentAUR: Central Archive at the University of Reading - UK
Abstract:
Red tape is undesirable because it impedes business growth. Relieving the administrative burdens that legislation imposes on businesses can benefit the whole economy, especially in times of recession. Recent governmental initiatives aimed at reducing administrative burdens have, however, met with mixed results, some succeeding and others failing. This article compares three national initiatives - in the Netherlands, the UK and Italy - that aimed to cut red tape using the Standard Cost Model. The findings highlight the factors that affect the outcomes of measurement and reduction plans and suggest ways to improve the Standard Cost Model methodology.
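In outline, the Standard Cost Model quantifies an administrative burden as a price (hourly tariff multiplied by the time an obligation takes) multiplied by a quantity (number of businesses multiplied by annual frequency). A minimal sketch of that calculation is shown below; all figures are purely illustrative, not taken from the article.

```python
# Minimal sketch of the Standard Cost Model calculation.
# All figures are illustrative placeholders, not data from the article.

def administrative_burden(tariff_per_hour, hours_per_task,
                          n_businesses, frequency_per_year):
    """Burden = Price x Quantity, where Price = tariff x time
    and Quantity = number of businesses x annual frequency."""
    price = tariff_per_hour * hours_per_task
    quantity = n_businesses * frequency_per_year
    return price * quantity

# Example: a hypothetical reporting obligation taking 2 hours at EUR 30/hour,
# filed quarterly by 10,000 firms.
burden = administrative_burden(tariff_per_hour=30.0, hours_per_task=2.0,
                               n_businesses=10_000, frequency_per_year=4)
print(f"Estimated annual administrative burden: EUR {burden:,.0f}")
```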
Abstract:
When formulating least-cost poultry diets, metabolisable energy (ME) concentration should be optimised by an iterative procedure rather than entered as a fixed value. The iteration must calculate profit margins by taking into account the way in which feed intake and saleable outputs vary with ME concentration. In broilers, adjusting critical amino acid contents in direct proportion to ME concentration does not produce birds of equal fatness. To avoid increased fat deposition at higher energy levels, it is proposed that amino acid specifications be adjusted in proportion to changes in the net energy supplied by the feed. A model is available that will both interpret responses to amino acids in laying trials and give economically optimal estimates of amino acid inputs for practical feed formulation. Flocks coming into lay and flocks nearing the end of the pullet year have bimodal distributions of rates of lay, with the result that calculations of requirement based on mean output will underestimate the optimal amino acid input for the flock. Chick diets containing surplus protein can impair utilisation of the first-limiting amino acid; this difficulty can be avoided by stating amino acid requirements as a proportion of the protein.
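As a rough illustration of the iterative procedure described above, the sketch below solves a least-cost blend for a grid of candidate ME concentrations and keeps the concentration giving the best margin. The ingredient values, amino acid specification, intake assumption and prices are invented placeholders, and the amino acid minimum is simply scaled with ME; the abstract's own recommendation (scaling with net energy) would replace that line in practice.

```python
# Sketch of iterating over ME concentration when formulating a least-cost
# broiler diet.  All ingredient values, requirements and prices below are
# invented placeholders, not data from the paper.
import numpy as np
from scipy.optimize import linprog

# ingredient: (ME MJ/kg, lysine g/kg, cost per kg)
ingredients = {
    "maize":        (13.5,  2.5, 0.20),
    "soybean_meal": ( 9.2, 28.0, 0.40),
    "soy_oil":      (37.0,  0.0, 0.90),
}
me_vals  = np.array([v[0] for v in ingredients.values()])
lys_vals = np.array([v[1] for v in ingredients.values()])
costs    = np.array([v[2] for v in ingredients.values()])

REVENUE_PER_BIRD = 1.50       # hypothetical sale value net of other costs
ENERGY_NEED_MJ   = 20.0       # hypothetical ME intake needed per bird
LYS_REQ_AT_12_5  = 11.0       # hypothetical lysine spec (g/kg) at 12.5 MJ/kg

def least_cost_blend(me_target):
    """Cheapest blend meeting the ME target and a lysine minimum scaled
    in proportion to ME (the simple adjustment discussed above)."""
    lys_req = LYS_REQ_AT_12_5 * me_target / 12.5
    return linprog(
        c=costs,
        A_eq=[np.ones_like(costs), me_vals], b_eq=[1.0, me_target],
        A_ub=[-lys_vals], b_ub=[-lys_req],
        bounds=[(0, 1)] * len(costs), method="highs",
    )

best = None
for me in np.arange(12.0, 13.6, 0.25):        # candidate ME concentrations
    res = least_cost_blend(me)
    if not res.success:
        continue
    intake_kg = ENERGY_NEED_MJ / me           # intake assumed to fall as ME rises
    margin = REVENUE_PER_BIRD - intake_kg * res.fun   # res.fun = cost per kg of feed
    if best is None or margin > best[1]:
        best = (me, margin)

print(f"ME with best margin: {best[0]:.2f} MJ/kg, margin {best[1]:.3f} per bird")
```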
Abstract:
Johne's disease in cattle is a contagious wasting disease caused by Mycobacterium avium subspecies paratuberculosis (MAP). Johne's infection is characterised by a long subclinical phase and can therefore go undetected for long periods, during which substantial production losses can occur. The protracted nature of the infection presents a challenge for both veterinarians and farmers when discussing control options, owing to a paucity of information and the limited performance of screening tests. The objective was to model Johne's control decisions in suckler beef cattle using a decision support approach, giving equal weight to 'end user' (veterinarian) participation and to the technical disease-modelling aspects during development of the decision support model. The model shows how Johne's disease is likely to affect a herd over time in terms of both physical and financial impacts. In addition, it simulates the effect on production of two control strategies: herd-management measures and test-and-cull measures. The article also presents and discusses a sensitivity analysis assessing the effect on production of improving the performance of currently available tests. Output from the model shows that a combination of management improvements to reduce routes of infection and testing and culling to remove infected and infectious animals is likely to be the least-cost control strategy.
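Purely to illustrate the kind of comparison such a decision support model makes, the sketch below steps a herd through a series of years under four combinations of the two strategies and accumulates a crude cost. Herd size, transmission rate, test performance, losses and costs are invented placeholders; the model described in the paper is far more detailed.

```python
# Very simplified sketch of comparing Johne's control strategies over time.
# All rates and costs are invented placeholders.
import random

def simulate(years=10, herd=100, init_infected=10, beta=0.3,
             management=False, test_and_cull=False,
             test_sensitivity=0.6, test_cost=8.0,
             loss_per_infected=150.0, replacement_cost=300.0, seed=1):
    random.seed(seed)
    infected = init_infected
    total_cost = 0.0
    for _ in range(years):
        eff_beta = beta * (0.5 if management else 1.0)   # fewer routes of infection
        susceptible = herd - infected
        new_cases = min(susceptible,
                        round(eff_beta * susceptible * infected / herd))
        infected += new_cases
        total_cost += infected * loss_per_infected        # production losses
        if test_and_cull:
            total_cost += herd * test_cost                 # whole-herd test
            detected = sum(random.random() < test_sensitivity
                           for _ in range(infected))       # imperfect test
            infected -= detected
            total_cost += detected * replacement_cost      # cull and replace
    return total_cost

for mgmt, tc in [(False, False), (True, False), (False, True), (True, True)]:
    cost = simulate(management=mgmt, test_and_cull=tc)
    print(f"management={mgmt!s:5} test_and_cull={tc!s:5}  10-year cost ~ {cost:,.0f}")
```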
Abstract:
Purpose – This paper describes visitors' reactions to using an Apple iPad or smartphone to follow trails in a museum by scanning QR codes, and draws conclusions on the potential for this technology to improve accessibility at low cost. Design/methodology/approach – Activities were devised in which visitors followed trails around museum objects, each labelled with a QR code and symbolised text. Visitors scanned the QR codes with a mobile device, which then showed more information about the object. Project-team members acted as participant-observers, engaging with visitors and noting how they used the system. Experiences from each activity fed into the design of the next. Findings – Some physical and technical problems with using QR codes can be overcome by introducing simple aids, particularly movable object labels. A layered approach to information access is possible, with the first layer comprising a label, the second a mobile-web-enabled screen and the third a choice of text, pictures, video and audio. Video was especially appealing to young people, and the ability to repeatedly watch video or listen to audio seemed to be appreciated by visitors with learning disabilities. The approach can have a low equipment cost, although maintaining the information behind the labels and keeping up with technological change are ongoing processes. Originality/value – Using QR codes on movable, symbolised object labels as part of a layered information system might help modestly funded museums enhance their accessibility, particularly as visitors increasingly arrive with their own smartphones or tablets.
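The layered approach described above can be prototyped with any QR library; a minimal sketch using the third-party Python `qrcode` package is shown below. The package, the URL scheme and the object identifier are assumptions for illustration, not the tooling used in the project.

```python
# Sketch of producing a printable QR label that points at a "layer 2"
# mobile-web page for one museum object.  The URL and object id are
# hypothetical; pip install qrcode[pil] is needed for the qrcode package.
import qrcode

object_id = "teapot-042"
layer2_url = f"https://example-museum.org/trail/{object_id}"  # text/pictures/video/audio choices live here

img = qrcode.make(layer2_url)          # returns a PIL image of the QR code
img.save(f"{object_id}_label.png")     # print and attach to a movable object label
print("Wrote", f"{object_id}_label.png", "->", layer2_url)
```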
Abstract:
A very efficient learning algorithm for model subset selection is introduced based on a new composite cost function that simultaneously optimizes the model approximation ability and model adequacy. The derived model parameters are estimated via forward orthogonal least squares, but the subset selection cost function includes an A-optimality design criterion to minimize the variance of the parameter estimates that ensures the adequacy and parsimony of the final model. An illustrative example is included to demonstrate the effectiveness of the new approach.
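A compact sketch of the selection idea is given below: a forward orthogonal least squares loop in which each step picks the candidate regressor whose error-reduction ratio, minus a weighted A-optimality penalty 1/(wᵀw) on the orthogonalised regressor, is largest. The exact composite cost function of the paper may differ; the data, the weighting `lam` and the stopping rule here are placeholders chosen for illustration.

```python
# Sketch of forward orthogonal least squares with an A-optimality term in the
# selection cost.  Data and the weighting `lam` are illustrative placeholders.
import numpy as np

def fols_aopt(X, y, lam=1e-2, max_terms=None):
    n, m = X.shape
    max_terms = max_terms or m
    selected, W = [], []            # chosen column indices and their orthogonalised versions
    yty = float(y @ y)
    for _ in range(max_terms):
        best = None
        for j in range(m):
            if j in selected:
                continue
            w = X[:, j].astype(float)
            for wk in W:                              # Gram-Schmidt against chosen regressors
                w = w - (wk @ X[:, j]) / (wk @ wk) * wk
            wtw = float(w @ w)
            if wtw < 1e-12:
                continue
            err = (w @ y) ** 2 / (wtw * yty)          # error reduction ratio
            gain = err - lam / wtw                    # penalise high parameter variance
            if best is None or gain > best[0]:
                best = (gain, j, w)
        if best is None or best[0] <= 0:
            break                                     # no candidate improves the composite cost
        selected.append(best[1])
        W.append(best[2])
    return selected

# toy example: y depends on columns 0 and 3 only
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.1 * rng.normal(size=200)
print("selected columns:", fols_aopt(X, y, lam=0.05))
```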
Abstract:
The variogram is essential for local estimation and mapping of any variable by kriging. The variogram itself must usually be estimated from sample data. The sampling density is a compromise between precision and cost, but it must be sufficiently dense to encompass the principal spatial sources of variance. A nested, multi-stage sampling scheme with separating distances increasing in geometric progression from stage to stage will do that. The data may then be analyzed by a hierarchical analysis of variance to estimate the components of variance for every stage, and hence for every lag. By accumulating the components starting from the shortest lag one obtains a rough variogram for modest effort. For balanced designs the analysis of variance is optimal; for unbalanced ones, however, these estimators are not necessarily the best, and analysis by residual maximum likelihood (REML) will usually be preferable. The paper summarizes the underlying theory and illustrates its application with data from three surveys: one in which the design had four stages and was balanced, and two implemented with unbalanced designs to economize when there were more stages. A Fortran program is available for the analysis of variance, and code for the REML analysis is listed in the paper.
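The accumulation step described above is straightforward: once a variance component has been estimated for each stage (each separating distance), the rough variogram value at a given lag is the sum of the components for that stage and all finer stages. The sketch below shows only that step, with invented lags and component values rather than data from the surveys.

```python
# Sketch of turning nested-ANOVA (or REML) variance components into a rough
# variogram by accumulating components from the shortest lag upwards.
# The lags and component values are invented placeholders.
import numpy as np

# separating distances (shortest first) and their estimated variance components
lags_m     = np.array([ 10,  30,  90, 270])   # metres, geometric progression
components = np.array([0.8, 0.6, 0.9, 0.4])   # one component per stage

rough_variogram = np.cumsum(components)        # accumulate from the shortest lag

for lag, gamma in zip(lags_m, rough_variogram):
    print(f"lag {lag:4d} m : semivariance ~ {gamma:.2f}")
```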
Abstract:
Cloud cover is conventionally estimated from satellite images as the observed fraction of cloudy pixels. Active instruments such as radar and lidar observe in narrow transects that sample only a small percentage of the area over which the cloud fraction is estimated. As a consequence, the fraction estimate has an associated sampling uncertainty, which usually remains unspecified. This paper extends a Bayesian method of cloud fraction estimation, which also provides an analytical estimate of the sampling error. The method is applied to test the sensitivity of this error to sampling characteristics, such as the number of observed transects and the variability of the underlying cloud field. The dependence of the uncertainty on these characteristics is investigated using synthetic data simulated to have properties closely resembling observations from the spaceborne lidar NASA-LITE mission. Results suggest that the variance of the cloud fraction is greatest for medium cloud cover and least when conditions are mostly cloudy or clear. However, there is a bias in the estimation, which is greatest around 25% and 75% cloud cover. The sampling uncertainty is also affected by the mean lengths of clouds and of clear intervals; shorter lengths decrease uncertainty, primarily because there are more cloud observations in a transect of a given length. Uncertainty also falls with increasing number of transects. A sampling strategy aimed at minimizing the uncertainty in transect-derived cloud fraction will therefore have to take into account the cloud and clear-sky length distributions as well as the cloud fraction of the observed field. These conclusions have implications for the design of future satellite missions. The paper describes the first integrated methodology for the analytical assessment of sampling uncertainty in cloud fraction observations from forthcoming spaceborne radar and lidar missions such as NASA's CALIPSO and CloudSat.
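The paper's Bayesian estimator accounts for the spatial structure of the cloud field; the sketch below shows only the simplest ingredient of such an approach, a Beta posterior for the cloud fraction given the cloudy and clear pixels in a transect, together with its analytical variance. The prior and the pixel counts are placeholders, and spatial correlation along the transect is ignored.

```python
# Minimal sketch of a Bayesian cloud-fraction estimate from transect pixels.
# A Beta-binomial model with a uniform prior is assumed; spatial correlation
# (which the paper's method handles) is ignored.  Pixel counts are invented.
n_pixels = 500          # pixels observed along the transects
n_cloudy = 180          # pixels flagged as cloudy

a0, b0 = 1.0, 1.0       # uniform Beta prior on the cloud fraction
a, b = a0 + n_cloudy, b0 + (n_pixels - n_cloudy)

mean = a / (a + b)
var  = a * b / ((a + b) ** 2 * (a + b + 1))   # analytical posterior variance

print(f"posterior mean cloud fraction : {mean:.3f}")
print(f"posterior std (sampling error): {var ** 0.5:.3f}")
```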
Abstract:
Six parameters uniquely describe the orbit of a body about the Sun. Given these parameters, it is possible to predict the body's position by solving its equation of motion. The parameters cannot be measured directly, so they must be inferred indirectly by an inversion method that uses measurements of other quantities in combination with the equation of motion. Inverse techniques are valuable tools in many applications where only noisy, incomplete and indirect observations are available for estimating parameter values. The methodology of the approach is introduced and the Kepler problem is used as a real-world example.
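A sketch of the inversion idea is given below: a planar Keplerian orbit (semi-major axis, eccentricity, orientation and time of perihelion) is fitted to noisy synthetic position observations by non-linear least squares. The forward model, units (AU and years), noise level and starting guess are simplified choices for illustration, not the paper's treatment.

```python
# Sketch of recovering orbital elements from noisy positions by non-linear
# least squares, for a planar two-body orbit in AU/years (so T = a**1.5).
# The "observations" are synthetic and all settings are illustrative.
import numpy as np
from scipy.optimize import least_squares

def kepler_E(M, e, iters=20):
    """Solve Kepler's equation E - e*sin(E) = M by Newton iteration."""
    E = M.copy()
    for _ in range(iters):
        E -= (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
    return E

def positions(params, t):
    a, e, omega, tau = params
    T = a ** 1.5                                   # period in years (AU, solar mass)
    M = 2.0 * np.pi * (t - tau) / T                # mean anomaly
    E = kepler_E(M, e)
    nu = 2.0 * np.arctan2(np.sqrt(1 + e) * np.sin(E / 2),
                          np.sqrt(1 - e) * np.cos(E / 2))   # true anomaly
    r = a * (1.0 - e * np.cos(E))
    return np.column_stack([r * np.cos(nu + omega), r * np.sin(nu + omega)])

def residuals(params, t, obs):
    return (positions(params, t) - obs).ravel()

# synthetic "observations" from a known orbit, with measurement noise
rng = np.random.default_rng(42)
true = np.array([1.5, 0.3, 0.8, 0.2])              # a [AU], e, omega [rad], tau [yr]
t = np.linspace(0.0, 2.0, 50)
obs = positions(true, t) + 0.01 * rng.normal(size=(t.size, 2))

fit = least_squares(residuals, x0=[1.4, 0.2, 0.6, 0.1], args=(t, obs))
print("recovered elements:", np.round(fit.x, 3), " true:", true)
```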
Abstract:
This note considers variance estimation for population size estimators based on capture–recapture experiments. Whereas a diversity of estimators of the population size has been suggested, the question of estimating the associated variances is addressed less frequently. This note points out that the technique of conditioning can be applied here successfully, and that it also allows sources of variation to be identified: the variance due to estimation of the model parameters and the binomial variance due to sampling n units from a population of size N. The technique is applied to estimators typically used in capture–recapture experiments in continuous time, including the estimators of Zelterman and Chao, and improves upon previously used variance estimators. In addition, knowledge of the variances associated with the estimators of Zelterman and Chao allows a new estimator to be suggested as the weighted sum of the two. The decomposition of the variance into the two sources also provides a new understanding of how resampling techniques such as the bootstrap can be used appropriately. Finally, the sample size question for capture–recapture experiments is addressed. Since the variance of population size estimators increases with the sample size, it is suggested that relative measures, such as the observed-to-hidden ratio or the completeness-of-identification proportion, be used when approaching the question of sample size choice.
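For concreteness, a small sketch of the two estimators named above is given below, working from the frequency counts f1 (units observed exactly once) and f2 (units observed exactly twice). The variance shown for Chao's estimator is the commonly cited Chao (1987) form; the improved, conditioning-based variance estimators proposed in the note are not reproduced, and the counts are invented.

```python
# Sketch of the Chao and Zelterman population-size estimators from
# capture-recapture frequency counts.  Counts are invented placeholders.
import math

n  = 250    # distinct units observed at least once
f1 = 120    # units observed exactly once
f2 = 45     # units observed exactly twice

# Chao's lower-bound estimator and the classical (Chao 1987) variance
N_chao   = n + f1 ** 2 / (2 * f2)
r        = f1 / f2
var_chao = f2 * (r ** 4 / 4 + r ** 3 + r ** 2 / 2)

# Zelterman's estimator (continuous-time / Poisson form)
N_zelterman = n / (1 - math.exp(-2 * f2 / f1))

print(f"Chao:      N ~ {N_chao:.0f}  (sd ~ {math.sqrt(var_chao):.0f})")
print(f"Zelterman: N ~ {N_zelterman:.0f}")
```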
Abstract:
Most building services products are installed while a building is constructed, but they are not operated until the building is commissioned. The warranty of such products may cover the time from their installation to the end of the warranty period. Prior to the commissioning of the building, the products are in a dormant mode (i.e., not operated) but are still protected by the warranty. For such products, both the usage intensity and the failure patterns differ from those of products in continuous use. This paper develops warranty cost models for repairable products with a dormant mode from both the manufacturer's and the buyer's perspectives. Relationships between the failure patterns in the dormant mode and in the operational mode are also discussed. Numerical examples and sensitivity analysis are used to demonstrate the applicability of the methodology derived in the paper.
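Under a minimal-repair assumption, an expected warranty cost of this kind is the repair cost multiplied by the expected number of failures, i.e. the cumulative failure intensity accumulated over the dormant and operational portions of the warranty period. The sketch below uses illustrative power-law intensities for the two phases; the parameters, costs and the relationship between the phases are placeholders, not the paper's model.

```python
# Sketch of an expected warranty cost with a dormant (installed but not
# operated) phase followed by an operational phase.  Each phase is modelled
# as a minimal-repair (NHPP) process with its own Weibull-type intensity;
# all parameters and costs are illustrative placeholders.

def expected_failures(duration, eta, beta):
    """Cumulative intensity (expected number of failures) of a power-law NHPP."""
    return (duration / eta) ** beta

dormant_months     = 6        # installation to commissioning
warranty_months    = 24       # total warranty from installation
operational_months = warranty_months - dormant_months

repair_cost = 120.0           # manufacturer's cost per repair

failures = (expected_failures(dormant_months,     eta=400.0, beta=1.0) +   # benign dormant intensity
            expected_failures(operational_months, eta=60.0,  beta=1.5))    # higher in-service intensity

print(f"expected failures over warranty : {failures:.2f}")
print(f"expected manufacturer cost      : {repair_cost * failures:.2f}")
```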
Abstract:
OBJECTIVES: To determine the cost-effectiveness of influenza vaccination in people aged 65-74 years in the absence of co-morbidity. DESIGN: Primary research: randomised controlled trial. SETTING: Primary care. PARTICIPANTS: People without risk factors for influenza or contraindications to vaccination were identified from 20 general practitioner (GP) practices in Liverpool in September 1999 and invited to participate in the study. There were 5875/9727 (60.4%) people aged 65-74 years identified as potentially eligible and, of these, 729 (12%) were randomised. INTERVENTION: Participants were randomised to receive either influenza vaccine or placebo (ratio 3:1), with all individuals receiving pneumococcal vaccine unless it had been administered in the previous 10 years. Of the 729 people randomised, 552 received vaccine and 177 received placebo; 726 individuals were administered pneumococcal vaccine. MAIN OUTCOME MEASURES AND METHODOLOGY OF ECONOMIC EVALUATION: GP attendance with influenza-like illness (ILI) or pneumonia (primary outcome measure); any respiratory symptoms; hospitalisation with a respiratory illness; death; participant self-reported ILI; quality of life (QoL) measures at 2, 4 and 6 months post-study vaccination; and adverse reactions 3 days after vaccination. A cost-effectiveness analysis was undertaken to identify the incremental cost associated with the avoidance of episodes of influenza in the vaccinated population, and an impact model was used to extrapolate the cost-effectiveness results obtained from the trial to assess their generalisability throughout the NHS. RESULTS: In England and Wales, weekly consultations for influenza and ILI remained at baseline levels (fewer than 50 per 100,000 population) until week 50/1999 and then increased rapidly, peaking during week 2/2000 at a rate of 231/100,000. This rate fell within the range of 'higher than expected seasonal activity' of 200-400/100,000. Rates then declined quickly, returning to baseline levels by week 5/2000. The predominant circulating strain during this period was influenza A (H3N2). Five (0.9%) people in the vaccine group were diagnosed by their GP with an ILI compared with two (1.1%) in the placebo group [relative risk (RR) 0.8; 95% confidence interval (CI) 0.16 to 4.1]. No participants were diagnosed with pneumonia by their GP and there were no hospitalisations for respiratory illness in either group. Significantly fewer vaccinated individuals self-reported a single ILI (4.6% vs 8.9%; RR 0.51; 95% CI 0.28 to 0.96). There was no significant difference in any of the QoL measurements over time between the two groups. Reported systemic side-effects showed no significant differences between groups. Local side-effects occurred with a significantly higher incidence in the vaccine group (11.3% vs 5.1%, p = 0.02). Each GP consultation avoided by vaccination was estimated from trial data to generate a net NHS cost of £174. CONCLUSIONS: No difference was seen between groups for the primary outcome measure, although the trial was underpowered to demonstrate a true difference. Vaccination had no significant effect on any of the QoL measures used, although vaccinated individuals were less likely to self-report ILI. The analysis did not suggest that influenza vaccination in healthy people aged 65-74 years would lead to lower NHS costs. Future research should look at ways to maximise vaccine uptake in people at greatest risk from influenza, and also at the level of vaccine protection afforded to people of different ages and socio-economic groups.
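The trial's economic result is expressed as a net NHS cost per GP consultation avoided. The sketch below shows the shape of that calculation only; every number in it is hypothetical and chosen purely to illustrate the arithmetic (the trial's own estimate was a net cost of £174 per consultation avoided).

```python
# Shape of the cost-effectiveness calculation: net NHS cost per GP
# consultation avoided by vaccination.  All numbers are hypothetical
# illustrations, not trial data.
n_vaccinated          = 10_000
cost_per_vaccination  = 8.00      # vaccine plus administration (hypothetical)
consultation_cost     = 20.00     # cost of one GP consultation (hypothetical)
consultations_avoided = 250       # attributable to vaccination (hypothetical)

programme_cost = n_vaccinated * cost_per_vaccination
savings        = consultations_avoided * consultation_cost
net_cost       = programme_cost - savings

print(f"net NHS cost per consultation avoided: "
      f"{net_cost / consultations_avoided:.2f} pounds")
```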
Abstract:
We consider a fully complex-valued radial basis function (RBF) network for regression applications. The locally regularised orthogonal least squares (LROLS) algorithm with the D-optimality experimental design, originally derived for constructing parsimonious real-valued RBF network models, is extended to the fully complex-valued RBF network. Like its real-valued counterpart, the proposed algorithm aims to achieve maximised model robustness and sparsity by combining two effective and complementary approaches. The LROLS algorithm alone is capable of producing a very parsimonious model with excellent generalisation performance, while the D-optimality design criterion further enhances the model efficiency and robustness. By specifying an appropriate weighting for the D-optimality cost in the combined model selection criterion, the entire model construction procedure becomes automatic. An example of identifying a complex-valued nonlinear channel is used to illustrate the regression application of the proposed fully complex-valued RBF network.
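The sketch below shows only the structure being fitted: a fully complex-valued Gaussian RBF model whose complex weights are obtained by ridge-regularised least squares on a toy nonlinear channel. The LROLS selection loop and the D-optimality weighting described above are not reproduced, and the channel, centres and hyperparameters are placeholders.

```python
# Sketch of a fully complex-valued Gaussian RBF regression on a toy
# nonlinear channel.  Complex weights come from ridge-regularised least
# squares; subset selection (LROLS + D-optimality) is not reproduced.
import numpy as np

rng = np.random.default_rng(3)

# toy complex channel: y = x + 0.2*x**2 + complex noise
x = (rng.normal(size=400) + 1j * rng.normal(size=400)) / np.sqrt(2)
y = x + 0.2 * x ** 2 + 0.05 * (rng.normal(size=400) + 1j * rng.normal(size=400))

centres = x[:40]                       # crude choice: first 40 inputs as centres
sigma, lam = 1.0, 1e-3

# design matrix of Gaussian activations of the complex distance |x - c|
Phi = np.exp(-np.abs(x[:, None] - centres[None, :]) ** 2 / sigma ** 2)

# complex ridge solution  w = (Phi^H Phi + lam I)^-1 Phi^H y
A = Phi.conj().T @ Phi + lam * np.eye(len(centres))
w = np.linalg.solve(A, Phi.conj().T @ y)

mse = np.mean(np.abs(Phi @ w - y) ** 2)
print(f"training MSE of the complex RBF fit: {mse:.4f}")
```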
Abstract:
A construction algorithm for multioutput radial basis function (RBF) network modelling is introduced by combining locally regularised orthogonal least squares (LROLS) model selection with a D-optimality experimental design. The proposed algorithm aims to achieve maximised model robustness and sparsity via two effective and complementary approaches. The LROLS method alone is capable of producing a very parsimonious RBF network model with excellent generalisation performance, while the D-optimality design criterion enhances the model efficiency and robustness. A further advantage of the combined approach is that the user only needs to specify a weighting for the D-optimality cost in the combined RBF model selection criterion, and the entire model construction procedure then becomes automatic. The value of this weighting does not influence the model selection procedure critically and can be chosen with ease from a wide range of values.
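One useful property behind the D-optimality cost referred to above: for the orthogonalised regressors produced by the forward selection, the determinant of WᵀW factorises into the product of the wₖᵀwₖ terms, so the D-optimality cost reduces to minus the sum of log(wₖᵀwₖ), and a single user-chosen weighting trades it off against the regularised error reduction. The short sketch below verifies that identity on an arbitrary orthogonal basis; the matrix and the weighting value are illustrative placeholders, not the paper's example.

```python
# Sketch of the D-optimality ingredient of the combined selection criterion:
# for mutually orthogonal columns W, det(W'W) = prod(w_k'w_k), so the
# D-optimality cost is -sum(log(w_k'w_k)).  The data and beta are placeholders.
import numpy as np

rng = np.random.default_rng(7)
X = rng.normal(size=(100, 4))
W, _ = np.linalg.qr(X)                    # an orthogonal basis standing in for selected regressors
W *= rng.uniform(0.5, 2.0, size=4)        # give the columns different energies

wtw = np.sum(W ** 2, axis=0)              # w_k' w_k for each selected regressor
d_opt_cost = -np.sum(np.log(wtw))         # -log det(W'W), since W'W is diagonal

assert np.isclose(np.log(np.linalg.det(W.T @ W)), np.sum(np.log(wtw)))

beta = 1e-3                               # user-chosen weighting in the combined criterion
print(f"D-optimality cost: {d_opt_cost:.4f}, weighted contribution: {beta * d_opt_cost:.6f}")
```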