880 results for Evaluation models
Abstract:
The results from three types of study with broilers, namely nitrogen (N) balance, bioassays and growth experiments, provided the data used herein. Sets of data on N balance and protein accretion (bioassay studies) were used to assess the ability of the monomolecular equation to describe the relationship between (i) N balance and amino acid (AA) intake and (ii) protein accretion and AA intake. The model estimated the levels of isoleucine, lysine, valine, threonine, methionine, total sulphur AAs and tryptophan resulting in zero balance to be 58, 59, 80, 96, 23, 85 and 32 mg/kg live weight (LW)/day, respectively. These estimates show good agreement with those obtained in previous studies. For the growth experiments, four models, specifically re-parameterized for analysing energy balance data, were evaluated for their ability to determine crude protein (CP) intake at maintenance and efficiency of utilization of CP intake for producing gain. They were: a straight line, two equations representing diminishing returns behaviour (monomolecular and rectangular hyperbola) and one equation describing smooth sigmoidal behaviour with a fixed point of inflexion (Gompertz). The estimates of CP requirement for maintenance and efficiency of utilization of CP intake for producing gain varied from 5.4 to 5.9 g/kg LW/day and 0.60 to 0.76, respectively, depending on the model.
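As a sketch of how such zero-balance intakes follow from the fitted curve, assume the monomolecular (Mitscherlich) equation in its common parameterization (the study's exact re-parameterization may differ):

\[
B(I) = B_{\max} - (B_{\max} - B_0)\,e^{-kI},
\]

where \(B\) is N balance (or protein accretion), \(I\) is AA intake (mg/kg LW/day), \(B_0 < 0\) is the balance at zero intake, \(B_{\max}\) is the asymptote and \(k\) is a rate constant. Setting \(B(I_0) = 0\) gives the zero-balance intake \(I_0 = \tfrac{1}{k}\ln\!\left[(B_{\max} - B_0)/B_{\max}\right]\), the quantity tabulated above for each AA.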
Abstract:
A total of 86 profiles from meat and egg strains of chickens (male and female) were used in this study. Different flexible growth functions were evaluated with regard to their ability to describe the relationship between live weight and age and were compared with the Gompertz and logistic equations, which have a fixed point of inflection. Six growth functions were used: Gompertz, logistic, Lopez, Richards, France, and von Bertalanffy. A comparative analysis was carried out based on model behavior and statistical performance. The results of this study confirmed the initial concern about the limitation of a fixed point of inflection, such as in the Gompertz equation. Therefore, flexible growth functions are recommended as alternatives to the simpler equations (with a fixed point of inflection) for describing the relationship between live weight and age, for the following reasons: they are easy to fit; because of their flexibility they very often give a closer fit to the data points, and therefore a smaller RSS value, than the simpler models; and they encompass the simpler models through the addition of an extra parameter, which is especially important when the behavior of a particular data set is not known beforehand.
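As an illustration of fixed versus variable points of inflection, consider common textbook forms of the Gompertz and Richards functions (the parameterizations actually fitted in the study may differ):

\[
W_{\mathrm{Gompertz}}(t) = W_f\,\exp\!\left(-e^{-k(t-t^{*})}\right), \qquad
W_{\mathrm{Richards}}(t) = W_f\left(1 + b\,e^{-kt}\right)^{-1/n}.
\]

The Gompertz curve always inflects at \(W = W_f/e \approx 0.37\,W_f\), whereas the Richards curve inflects at \(W = W_f(1+n)^{-1/n}\), so the extra parameter \(n\) lets the data determine where the inflection falls; for \(n = 1\) it reduces to the logistic and, as \(n \to 0\), to the Gompertz.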
Abstract:
Data from six studies with male broilers fed diets covering a wide range of energy and protein were used in the current two analyses. In the first analysis, five models, specifically re-parameterized for analysing energy balance data, were evaluated for their ability to determine metabolizable energy intake at maintenance and efficiency of utilization of metabolizable energy intake for producing gain. In addition to the straight line, two types of functional form were used. They were forms describing (i) diminishing returns behaviour (monomolecular and rectangular hyperbola) and (ii) sigmoidal behaviour with a fixed point of inflection (Gompertz and logistic). These models determined the metabolizable energy requirement for maintenance to be in the range 437-573 kJ/kg of body weight/day depending on the model. The values determined for average net energy requirement for body weight gain varied from 7.9 to 11.2 kJ/g of body weight. These values show good agreement with previous studies. In the second analysis, three types of function were assessed as candidates for describing the relationship between body weight and cumulative metabolizable energy intake. The functions used were: (a) monomolecular (diminishing returns behaviour), (b) Gompertz (smooth sigmoidal behaviour with a fixed point of inflection) and (c) Lopez, France and Richards (diminishing returns and sigmoidal behaviour with a variable point of inflection). The results of this analysis demonstrated that equations capable of mimicking the law of diminishing returns accurately describe the relationship between body weight and cumulative metabolizable energy intake in broilers.
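A minimal sketch of fitting the monomolecular candidate from the second analysis to body weight versus cumulative metabolizable energy intake is given below; the data points and starting values are illustrative, not those of the six studies.

```python
# Fitting a monomolecular (diminishing-returns) curve to body weight (g) versus
# cumulative metabolizable energy intake (MJ). Observations below are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def monomolecular(E, W0, Wf, k):
    """Body weight as a diminishing-returns function of cumulative ME intake E."""
    return Wf - (Wf - W0) * np.exp(-k * E)

E_obs = np.array([0.0, 5.0, 10.0, 20.0, 40.0, 60.0, 80.0])              # MJ
W_obs = np.array([45.0, 320.0, 560.0, 1020.0, 1700.0, 2150.0, 2450.0])  # g

params, _ = curve_fit(monomolecular, E_obs, W_obs, p0=[45.0, 3000.0, 0.02])
W0_hat, Wf_hat, k_hat = params
print(f"W0 = {W0_hat:.0f} g, asymptote Wf = {Wf_hat:.0f} g, rate k = {k_hat:.3f} /MJ")
```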
Abstract:
The suitability of models specifically re-parameterized for analyzing energy balance data relating metabolizable energy intake to growth rate has recently been investigated in male broilers. In this study, the more adequate of those models were applied to growing turkeys to provide estimates of their energy needs for maintenance and growth. Three functional forms were used: two equations representing diminishing returns behaviour (monomolecular and rectangular hyperbola) and one equation describing smooth sigmoidal behaviour with a fixed point of inflexion (Gompertz). The models estimated the metabolizable energy requirement for maintenance in turkeys to be 359-415 kJ/kg of live weight/day. The predicted values of average net energy requirement for producing 1 g of gain in live weight, between 1 and 4 times maintenance, varied from 8.7 to 10.9 kJ. These results, and those previously reported for broilers, provide a basis for accepting the general validity of these models.
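As a back-of-envelope illustration of how the reported ranges might be applied (the 2 kg bird and 50 g/day gain are hypothetical, and the retained energy is not converted to a metabolizable energy cost):

```python
# Back-of-envelope use of the reported ranges for a hypothetical 2 kg turkey gaining 50 g/day.
lw_kg, gain_g_per_day = 2.0, 50.0
me_maintenance = (359.0, 415.0)   # kJ ME per kg live weight per day (range reported above)
ne_gain = (8.7, 10.9)             # kJ net energy retained per g of live-weight gain

maintenance_kj = tuple(a * lw_kg for a in me_maintenance)    # daily ME for maintenance
retained_kj = tuple(b * gain_g_per_day for b in ne_gain)     # daily energy retained in gain
print(f"Maintenance ME: {maintenance_kj[0]:.0f}-{maintenance_kj[1]:.0f} kJ/day")
print(f"Energy retained in gain: {retained_kj[0]:.0f}-{retained_kj[1]:.0f} kJ/day")
```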
Abstract:
The effectiveness of development assistance has come under renewed scrutiny in recent years. In an era of growing economic liberalisation, research organisations are increasingly being asked to account for the use of public funds by demonstrating achievements. However, in the natural resources (NR) research field, conventional economic assessment techniques have focused on quantifying the impact achieved rather than understanding the process that delivered it. As a result, they provide limited guidance for planners and researchers charged with selecting and implementing future research. In response, “pathways” or logic models have attracted increased interest in recent years as a remedy to this shortcoming. However, as commonly applied, these suffer from two key limitations: a limited ability to incorporate risk and a limited ability to assess variance from plan. The paper reports the results of a case study that used a Bayesian belief network approach to address these limitations and outlines its potential value as a tool to assist the planning, monitoring and evaluation of development-orientated research.
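A minimal sketch of the belief-network idea, reduced to a single two-node pathway with invented node names and probabilities, is shown below; real applications would involve many more nodes and a dedicated package.

```python
# Hypothetical two-node pathway: "research output delivered" -> "uptake by intended users",
# with all probabilities invented for illustration.
p_output = 0.8                        # P(research delivers a usable output)
p_uptake_given_output = {True: 0.6,   # P(uptake | output delivered)
                         False: 0.05}

# Forward (planning) view: marginal probability of the planned development outcome.
p_uptake = (p_output * p_uptake_given_output[True]
            + (1 - p_output) * p_uptake_given_output[False])

# Diagnostic (monitoring/evaluation) view: if uptake is observed, how likely is it
# that the research output was actually delivered? (Bayes' rule)
p_output_given_uptake = p_output * p_uptake_given_output[True] / p_uptake
print(f"P(uptake) = {p_uptake:.2f}, P(output | uptake) = {p_output_given_uptake:.2f}")
```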
Abstract:
A method was developed to evaluate crop disease predictive models for their economic and environmental benefits. Benefits were quantified as the value of a prediction, measured by costs saved and fungicide dose saved. The value of prediction was defined as the net gain made by using predictions, measured as the difference between a scenario where predictions are available and used and a scenario without prediction. Comparable 'with' and 'without' scenarios were created with the use of risk levels. These risk levels were derived from a probability distribution fitted to observed disease severities. These distributions were used to calculate the probability that a certain disease-induced economic loss was incurred. The method was exemplified by using it to evaluate a model developed for Mycosphaerella graminicola risk prediction. Based on the value of prediction, the tested model may have economic and environmental benefits to growers if used to guide treatment decisions on resistant cultivars. It is shown that the value of prediction measured by fungicide dose saved and costs saved is constant with the risk level. The method could also be used to evaluate similar crop disease predictive models.
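The sketch below illustrates the 'with' versus 'without' prediction comparison for one risk level, assuming a routine-treatment baseline and a model with given sensitivity and specificity; every number is invented for illustration and treatment is assumed to prevent the loss entirely.

```python
# Hypothetical 'with prediction' vs 'without prediction' comparison at one risk level.
p_epidemic = 0.30          # probability of a damaging epidemic at this risk level
loss_if_untreated = 250.0  # value of yield lost if an epidemic occurs and no spray is applied
spray_cost = 40.0          # cost of one full-dose fungicide application
sensitivity, specificity = 0.90, 0.80   # assumed accuracy of the predictive model

# Without prediction: treat routinely every season, so the loss is assumed to be prevented.
cost_without = spray_cost

# With prediction: treat only when the model signals risk.
p_spray = sensitivity * p_epidemic + (1 - specificity) * (1 - p_epidemic)
cost_with = p_spray * spray_cost + p_epidemic * (1 - sensitivity) * loss_if_untreated

value_of_prediction = cost_without - cost_with   # costs saved by using the model
dose_saved = 1.0 - p_spray                       # expected full doses saved per season
print(f"Value of prediction: {value_of_prediction:.1f} per ha; dose saved: {dose_saved:.2f}")
```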
Abstract:
Similarities between the anatomies of living organisms are often used to draw conclusions regarding the ecology and behaviour of extinct animals. Several pterosaur taxa are postulated to have been skim-feeders based largely on supposed convergences of their jaw anatomy with that of the modern skimming bird, Rynchops spp. Using physical and mathematical models of Rynchops bills and pterosaur jaws, we show that skimming is considerably more energetically costly than previously thought for Rynchops and that pterosaurs weighing more than one kilogram would not have been able to skim at all. Furthermore, anatomical comparisons between the highly specialised skull of Rynchops and those of postulated skimming pterosaurs suggest that even smaller forms were poorly adapted for skim-feeding. Our results refute the hypothesis that some pterosaurs commonly used skimming as a foraging method and illustrate the pitfalls involved in extrapolating from limited morphological convergence.
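For orientation only (this is generic hydrodynamics, not the specific physical or mathematical model used in the study), the power needed to drag a submerged lower bill through water scales as

\[
P_{\mathrm{drag}} \approx \tfrac{1}{2}\,\rho\,C_d\,A\,v^{3},
\]

where \(\rho\) is water density, \(C_d\) a drag coefficient, \(A\) the submerged cross-sectional area of the bill and \(v\) the flight speed; the cost of skimming therefore rises steeply with both the size of the submerged bill and the speed at which it must be towed.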
Abstract:
Nested clade phylogeographic analysis (NCPA) is a popular method for reconstructing the demographic history of spatially distributed populations from genetic data. Although some parts of the analysis are automated, there is no unique and widely followed algorithm for doing this in its entirety, beginning with the data, and ending with the inferences drawn from the data. This article describes a method that automates NCPA, thereby providing a framework for replicating analyses in an objective way. To do so, a number of decisions need to be made so that the automated implementation is representative of previous analyses. We review how the NCPA procedure has evolved since its inception and conclude that there is scope for some variability in the manual application of NCPA. We apply the automated software to three published datasets previously analyzed manually and replicate many details of the manual analyses, suggesting that the current algorithm is representative of how a typical user will perform NCPA. We simulate a large number of replicate datasets for geographically distributed, but entirely random-mating, populations. These are then analyzed using the automated NCPA algorithm. Results indicate that NCPA tends to give a high frequency of false positives. In our simulations we observe that 14% of the clades give a conclusive inference that a demographic event has occurred, and that 75% of the datasets have at least one clade that gives such an inference. This is mainly due to the generation of multiple statistics per clade, of which only one is required to be significant to apply the inference key. We survey the inferences that have been made in recent publications and show that the most commonly inferred processes (restricted gene flow with isolation by distance and contiguous range expansion) are those that are commonly inferred in our simulations. However, published datasets typically yield a richer set of inferences with NCPA than obtained in our random-mating simulations, and further testing of NCPA with models of structured populations is necessary to examine its accuracy.
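The multiple-statistics effect described above can be illustrated with a short calculation, assuming for simplicity that the statistics are independent and each is tested at alpha = 0.05:

```python
# Per-clade probability of at least one 'significant' statistic when only one of k
# (assumed independent) statistics needs to reach alpha = 0.05 to trigger the inference key.
alpha = 0.05
for k in (1, 2, 4, 8):
    p_any = 1 - (1 - alpha) ** k
    print(f"k = {k} statistics per clade -> P(at least one significant) = {p_any:.2f}")
```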
Abstract:
An evaluation of milk urea nitrogen (MUN) as a diagnostic of protein feeding in dairy cows was performed using mean treatment data (n = 306) from 50 production trials conducted in Finland (n = 48) and Sweden (n = 2). Data were used to assess the effects of diet composition and certain animal characteristics on MUN and to derive relationships between MUN and the efficiency of N utilization for milk production and urinary N excretion. Relationships were developed using regression analysis based on either fixed-factor models or mixed models that account for between-experiment variation. Dietary crude protein (CP) content was the best single predictor of MUN and accounted for proportionately 0.778 of total variance [MUN (mg/dL) = -14.2 + 0.17 × dietary CP content (g/kg dry matter)]. The proportion of variation explained by this relationship increased to 0.952 when a mixed model including the random effects of study was used, but both the intercept and slope remained unchanged. Use of rumen degradable CP concentration in excess of predicted requirements, or the ratio of dietary CP to metabolizable energy as single predictors, did not explain more of the variation in MUN (R² = 0.767 or 0.778, respectively) than dietary CP content. Inclusion of other dietary factors with dietary CP content in bivariate models resulted in only marginally better predictions of MUN (R² = 0.785 to 0.804). Closer relationships existed between MUN and dietary factors when nutrients (CP to metabolizable energy) were expressed as concentrations in the diet, rather than absolute intakes. Furthermore, both MUN and MUN secretion (g/d) provided more accurate predictions of urinary N excretion (R² = 0.787 and 0.835, respectively) than measurements of the efficiency of N utilization for milk production (R² = 0.769). It is concluded that dietary CP content is the most important nutritional factor influencing MUN, and that measurements of MUN can be utilized as a diagnostic of protein feeding in the dairy cow and used to predict urinary N excretion.
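The sketch below applies the fixed-effects equation reported above and shows how a corresponding mixed model with a random study effect could be refitted; the small DataFrame is invented, whereas the real analysis used 306 treatment means from 50 trials.

```python
# Predicting MUN from dietary CP with the reported fixed-effects equation, plus a sketch
# of a mixed-model refit with a random study effect (data below are invented).
import pandas as pd
import statsmodels.formula.api as smf

def predict_mun(cp_g_per_kg_dm: float) -> float:
    """MUN (mg/dL) from dietary CP content (g/kg DM), per the reported regression."""
    return -14.2 + 0.17 * cp_g_per_kg_dm

print(f"Diet with 170 g CP/kg DM -> MUN of about {predict_mun(170):.1f} mg/dL")

df = pd.DataFrame({
    "CP":    [140, 160, 180, 200] * 3,
    "MUN":   [10.9, 13.7, 17.7, 20.5, 9.3, 13.3, 16.1, 20.1, 8.9, 11.7, 15.7, 18.5],
    "study": ["A"] * 4 + ["B"] * 4 + ["C"] * 4,
})
mixed_fit = smf.mixedlm("MUN ~ CP", data=df, groups=df["study"]).fit()
print(mixed_fit.params)   # intercept, CP slope and the random study-effect variance
```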
Abstract:
This paper addresses the need for accurate predictions of the fault inflow, i.e. the number of faults found in consecutive project weeks, in highly iterative processes. In such processes, in contrast to waterfall-like processes, fault repair and development of new features run almost in parallel. Given accurate predictions of fault inflow, managers could dynamically re-allocate resources between these different tasks in a more adequate way. Furthermore, managers could react with process improvements when the expected fault inflow is higher than desired. This study suggests software reliability growth models (SRGMs) for predicting fault inflow. Although originally developed for traditional processes, these models are investigated here for their performance in highly iterative processes. Additionally, a simple linear model is developed and compared to the SRGMs. The paper provides results from applying these models to fault data from three different industrial projects. One of the key findings of this study is that some SRGMs are applicable for predicting fault inflow in highly iterative processes. Moreover, the results show that the simple linear model represents a valid alternative to the SRGMs, as it provides reasonably accurate predictions and performs better in many cases.
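As a sketch of the approach (the paper's specific SRGMs are not named here, so the classical Goel-Okumoto model stands in, and the weekly fault counts are invented):

```python
# Fitting one classical SRGM (Goel-Okumoto, m(t) = a * (1 - exp(-b t))) and a simple
# linear model to cumulative fault counts per project week, then comparing fit.
import numpy as np
from scipy.optimize import curve_fit

weeks = np.arange(1, 13)
cum_faults = np.array([8, 18, 30, 39, 47, 55, 61, 66, 71, 74, 77, 79])  # illustrative

def goel_okumoto(t, a, b):
    return a * (1.0 - np.exp(-b * t))

(a_hat, b_hat), _ = curve_fit(goel_okumoto, weeks, cum_faults, p0=[100.0, 0.1])
lin_coef = np.polyfit(weeks, cum_faults, 1)

sse_srgm = np.sum((cum_faults - goel_okumoto(weeks, a_hat, b_hat)) ** 2)
sse_lin = np.sum((cum_faults - np.polyval(lin_coef, weeks)) ** 2)
print(f"Goel-Okumoto: a = {a_hat:.1f}, b = {b_hat:.3f}, SSE = {sse_srgm:.1f}")
print(f"Linear model: slope = {lin_coef[0]:.1f} faults/week, SSE = {sse_lin:.1f}")
```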
Abstract:
A large number of urban surface energy balance models now exist with different assumptions about the important features of the surface and exchange processes that need to be incorporated. To date, no comparison of these models has been conducted; in contrast, models for natural surfaces have been compared extensively as part of the Project for Intercomparison of Land-surface Parameterization Schemes. Here, the methods and first results from an extensive international comparison of 33 models are presented. The aim of the comparison overall is to understand the complexity required to model energy and water exchanges in urban areas. The degree of complexity included in the models is outlined and impacts on model performance are discussed. During the comparison there have been significant developments in the models with resulting improvements in performance (root-mean-square error falling by up to two-thirds). Evaluation is based on a dataset containing net all-wave radiation, sensible heat, and latent heat flux observations for an industrial area in Vancouver, British Columbia, Canada. The aim of the comparison is twofold: to identify those modeling approaches that minimize the errors in the simulated fluxes of the urban energy balance and to determine the degree of model complexity required for accurate simulations. There is evidence that some classes of models perform better for individual fluxes, but no model performs best or worst for all fluxes. In general, the simpler models perform as well as the more complex models based on all statistical measures. Generally the schemes have the best overall capability to model net all-wave radiation and the least capability to model latent heat flux.
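A minimal sketch of the per-flux error statistics used in such a comparison is shown below; the observed and modelled values are invented stand-ins, not the Vancouver dataset.

```python
# RMSE of modelled versus observed fluxes (W m-2) for net all-wave radiation (Q*),
# sensible heat (QH) and latent heat (QE). Arrays are illustrative.
import numpy as np

obs = {"Q*": np.array([420.0, 305.0, 120.0, -55.0]),
       "QH": np.array([210.0, 150.0, 60.0, -10.0]),
       "QE": np.array([90.0, 70.0, 35.0, 5.0])}
mod = {"Q*": np.array([400.0, 310.0, 140.0, -40.0]),
       "QH": np.array([230.0, 160.0, 50.0, 0.0]),
       "QE": np.array([60.0, 50.0, 20.0, 15.0])}

for flux in obs:
    rmse = np.sqrt(np.mean((mod[flux] - obs[flux]) ** 2))
    print(f"{flux}: RMSE = {rmse:.1f} W m-2")
```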
Abstract:
Although in several EU Member States many public interventions have been running for the prevention and/or management of obesity and other nutrition-related health conditions, few have yet been formally evaluated. The multidisciplinary team of the EATWELL project will gather benchmark data on healthy eating interventions in EU Member States and review existing information on the effectiveness of interventions using a three-stage procedure: (i) assessment of the intervention's impact on consumer attitudes, consumer behaviour and diets; (ii) the impact of the change in diets on obesity and health; and (iii) the value attached by society to these changes, measured in life years gained, cost savings and quality-adjusted life years. Where evaluations have been inadequate, EATWELL will gather secondary data and analyse them with a multidisciplinary approach incorporating models from the psychology and economics disciplines. Particular attention will be paid to lessons learned from the private sector that are transferable to healthy eating campaigns in the public sector. Through consumer surveys and workshops with other stakeholders, EATWELL will assess the acceptability of the range of potential interventions. Armed with scientific quantitative evaluations of policy interventions and their acceptability to stakeholders, EATWELL expects to recommend more appropriate interventions for Member States and the EU, providing a one-stop guide to methods and measures in intervention evaluation, and to outline data collection priorities for the future.
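A generic illustration (with invented numbers) of the stage (iii) valuation, expressing an intervention's effect as quality-adjusted life years and a net cost per QALY:

```python
# Invented numbers illustrating the stage (iii) valuation of a healthy-eating intervention.
intervention_cost = 2_000_000.0   # programme cost
healthcare_savings = 650_000.0    # treatment costs avoided (cost savings)
life_years_gained = 900.0
quality_weight = 0.85             # average quality-of-life weight applied to those years

qalys_gained = life_years_gained * quality_weight
net_cost = intervention_cost - healthcare_savings
print(f"QALYs gained: {qalys_gained:.0f}")
print(f"Net cost per QALY: {net_cost / qalys_gained:,.0f}")
```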
Abstract:
This paper presents an enhanced hypothesis verification strategy for 3D object recognition. A new learning methodology is presented which integrates the traditional dichotomic object-centred and appearance-based representations in computer vision giving improved hypothesis verification under iconic matching. The "appearance" of a 3D object is learnt using an eigenspace representation obtained as it is tracked through a scene. The feature representation implicitly models the background and the objects observed enabling the segmentation of the objects from the background. The method is shown to enhance model-based tracking, particularly in the presence of clutter and occlusion, and to provide a basis for identification. The unified approach is discussed in the context of the traffic surveillance domain. The approach is demonstrated on real-world image sequences and compared to previous (edge-based) iconic evaluation techniques.
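A minimal sketch of the eigenspace ("appearance") idea follows, using synthetic patches in place of tracked object views: build a PCA basis from vectorised appearance samples and use reconstruction error as a hypothesis-verification score. Patch size, number of components and data are all illustrative.

```python
# Eigenspace sketch: PCA basis from vectorised views of a tracked object, with
# reconstruction error as a verification score. Data are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(0)
modes = rng.random((5, 32 * 32))                             # 5 underlying appearance modes
weights = rng.random((50, 5))
views = weights @ modes + 0.01 * rng.random((50, 32 * 32))   # 50 noisy training views

mean = views.mean(axis=0)
_, _, vt = np.linalg.svd(views - mean, full_matrices=False)
basis = vt[:10]                                              # top-10 eigenvectors span the eigenspace

def verification_score(patch: np.ndarray) -> float:
    """Reconstruction error of a candidate patch; low error supports the object hypothesis."""
    coeffs = basis @ (patch - mean)
    return float(np.linalg.norm(patch - (mean + basis.T @ coeffs)))

print(verification_score(views[0]))             # a known view reconstructs well (small error)
print(verification_score(rng.random(32 * 32)))  # an unrelated patch reconstructs poorly
```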
Abstract:
A significant challenge in the prediction of climate change impacts on ecosystems and biodiversity is quantifying the sources of uncertainty that emerge within and between different models. Statistical species niche models have grown in popularity, yet no single best technique has been identified, reflecting differing performance in different situations. Our aim was to quantify uncertainties associated with the application of 2 complementary modelling techniques. Generalised linear mixed models (GLMM) and generalised additive mixed models (GAMM) were used to model the realised niche of ombrotrophic Sphagnum species in British peatlands. These models were then used to predict changes in Sphagnum cover between 2020 and 2050 based on projections of climate change and atmospheric deposition of nitrogen and sulphur. Over 90% of the variation in the GLMM predictions was due to niche model parameter uncertainty, dropping to 14% for the GAMM. After having covaried out other factors, average variation in predicted values of Sphagnum cover across UK peatlands was the next largest source of variation (8% for the GLMM and 86% for the GAMM). The better performance of the GAMM needs to be weighed against its tendency to overfit the training data. While our niche models are only a first approximation, we used them to undertake a preliminary evaluation of the relative importance of climate change and nitrogen and sulphur deposition, and of the geographic locations of the largest expected changes in Sphagnum cover. Predicted changes in cover were all small (generally <1% in an average 4 m² unit area) but also highly uncertain. Peatlands expected to be most affected by climate change in combination with atmospheric pollution were Dartmoor, Brecon Beacons and the western Lake District.
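The dominant role of parameter uncertainty can be sketched as follows: draw coefficient vectors from their estimated sampling distribution and inspect the resulting spread in predicted cover at a single site. The coefficients, covariance and 2050 covariate values below are invented, and a simple logit link stands in for the fitted GLMM.

```python
# Propagating niche-model parameter uncertainty into a prediction at one site.
import numpy as np

rng = np.random.default_rng(42)
beta_hat = np.array([-1.0, 0.04, -0.02])    # intercept, temperature, N deposition (logit scale)
beta_cov = np.diag([0.10, 0.0004, 0.0001])  # assumed coefficient covariance matrix

x_2050 = np.array([1.0, 9.5, 12.0])         # design vector: intercept, degC, kg N/ha/yr
draws = rng.multivariate_normal(beta_hat, beta_cov, size=5000)
cover = 1.0 / (1.0 + np.exp(-(draws @ x_2050)))   # inverse logit -> predicted cover fraction

print(f"mean predicted cover: {cover.mean():.3f}")
print(f"95% interval from parameter uncertainty alone: "
      f"{np.percentile(cover, 2.5):.3f}-{np.percentile(cover, 97.5):.3f}")
```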