151 results for Classical measurement error model
Abstract:
The paper considers meta-analysis of diagnostic studies that use a continuous score for classification of study participants into healthy or diseased groups. Classification is often done on the basis of a threshold or cut-off value, which might vary between studies. Consequently, conventional meta-analysis methodology focusing solely on separate analysis of sensitivity and specificity might be confounded by a potentially unknown variation of the cut-off value. To cope with this phenomenon, it is suggested to use instead an overall estimate of the misclassification error, previously suggested and used as Youden’s index; furthermore, it is argued that this index is less prone to between-study variation of cut-off values. A simple Mantel–Haenszel estimator is suggested as a summary measure of the overall misclassification error, which adjusts for a potential study effect. The measure of the misclassification error based on Youden’s index is advantageous in that it easily allows an extension to a likelihood approach, which is then able to cope with unobserved heterogeneity via a nonparametric mixture model. All methods are illustrated with an example of a diagnostic meta-analysis on duplex Doppler ultrasound, with angiography as the standard, for stroke prevention.
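A minimal sketch of the quantities involved, assuming hypothetical 2x2 study tables (all counts invented): per-study sensitivity, specificity and Youden's index, and a crude size-weighted pooled misclassification error. The paper's actual Mantel–Haenszel estimator, which adjusts for a potential study effect, is not reproduced here.

```python
import numpy as np

# Hypothetical 2x2 counts per study (TP, FN, FP, TN); values are invented.
studies = np.array([
    [45, 5, 8, 42],
    [30, 10, 6, 54],
    [60, 15, 12, 63],
])

TP, FN, FP, TN = studies.T
sens = TP / (TP + FN)          # sensitivity per study
spec = TN / (TN + FP)          # specificity per study
youden = sens + spec - 1       # Youden's index J
misclass = 1 - youden          # misclassification error based on J

# Crude size-weighted pooled summary (not the paper's Mantel-Haenszel weighting).
weights = studies.sum(axis=1)
print(youden, np.average(misclass, weights=weights))
```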
Abstract:
Recent studies into price transmission have recognized the important role played by transport and transaction costs. Threshold models are one approach to accommodating such costs. We develop a generalized Threshold Error Correction Model to test for the presence and form of threshold behavior in price transmission that is symmetric around equilibrium. We use monthly wheat, maize, and soya prices from the United States, Argentina, and Brazil to demonstrate this model. Classical estimation of these generalized models can present challenges, but Bayesian techniques avoid many of these problems. Evidence for thresholds is found in three of the five commodity price pairs investigated.
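As an illustration of the kind of threshold behavior such a model allows for, the sketch below (with an invented function name and parameter values) applies a piecewise error-correction response that is symmetric around equilibrium: weak or no adjustment inside a neutral band and a stronger pull outside it.

```python
import numpy as np

def threshold_adjustment(ect, threshold, rho_inside, rho_outside):
    """Error-correction response symmetric around equilibrium: a weaker
    (or zero) pull while |ect| <= threshold, a stronger pull outside it."""
    ect = np.asarray(ect, float)
    outside = np.abs(ect) > threshold
    return np.where(outside, rho_outside * ect, rho_inside * ect)

# Illustrative disequilibrium errors (log-price gaps between two markets).
ect = np.linspace(-0.3, 0.3, 7)
print(threshold_adjustment(ect, threshold=0.1, rho_inside=0.0, rho_outside=-0.4))
```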
Abstract:
Nonlinear adjustment toward long-run price equilibrium relationships in the sugar-ethanol-oil nexus in Brazil is examined. We develop generalized bivariate error correction models that allow for cointegration between sugar, ethanol, and oil prices, where dynamic adjustments are potentially nonlinear functions of the disequilibrium errors. A range of models is estimated using Bayesian Markov chain Monte Carlo (MCMC) algorithms and compared using Bayesian model selection methods. The results suggest that the long-run drivers of Brazilian sugar prices are oil prices and that there are nonlinearities in the adjustment processes of sugar and ethanol prices to oil prices, but linear adjustment between ethanol and sugar prices.
Abstract:
The formulation of a new process-based crop model, the General Large-Area Model (GLAM) for annual crops, is presented. The model has been designed to operate on spatial scales commensurate with those of global and regional climate models. It aims to simulate the impact of climate on crop yield. Procedures for model parameter determination and optimisation are described, and demonstrated for the prediction of groundnut (i.e. peanut; Arachis hypogaea L.) yields across India for the period 1966-1989. Optimal parameters (e.g. extinction coefficient, transpiration efficiency, rate of change of harvest index) were stable over space and time, provided the estimate of the yield technology trend was based on the full 24-year period. The model has two location-specific parameters: the planting date and the yield gap parameter. The latter varies spatially and is determined by calibration. The optimal value varies slightly when different input data are used. The model was tested using a historical data set on a 2.5° × 2.5° grid to simulate yields. Three sites are examined in detail: grid cells from Gujarat in the west, Andhra Pradesh towards the south, and Uttar Pradesh in the north. Agreement between observed and modelled yield was variable, with correlation coefficients of 0.74, 0.42 and 0, respectively. Skill was highest where the climate signal was greatest, and correlations were comparable to or greater than correlations with seasonal mean rainfall. Yields from all 35 cells were aggregated to simulate all-India yield. The correlation coefficient between observed and simulated yields was 0.76, and the root mean square error was 8.4% of the mean yield. The model can easily be extended to any annual crop for the investigation of the impacts of climate variability (or change) on crop yield over large areas.
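A small sketch of the evaluation statistics quoted above, the correlation between observed and simulated yields and the RMSE expressed as a percentage of mean observed yield; the yield values and function name are illustrative, not the paper's data.

```python
import numpy as np

def evaluate_yields(observed, simulated):
    """Correlation coefficient and root mean square error expressed as a
    percentage of the mean observed yield, as used to summarise model skill."""
    observed = np.asarray(observed, float)
    simulated = np.asarray(simulated, float)
    r = np.corrcoef(observed, simulated)[0, 1]
    rmse = np.sqrt(np.mean((observed - simulated) ** 2))
    return r, 100.0 * rmse / observed.mean()

# Illustrative all-India yields (t/ha); values are invented.
obs = [0.81, 0.90, 0.76, 0.95, 0.88]
sim = [0.78, 0.93, 0.80, 0.91, 0.86]
print(evaluate_yields(obs, sim))
```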
Abstract:
Grass-based diets are of increasing social-economic importance in dairy cattle farming, but their low supply of glucogenic nutrients may limit the production of milk. Current evaluation systems that assess the energy supply and requirements are based on metabolisable energy (ME) or net energy (NE). These systems do not consider the characteristics of the energy-delivering nutrients. In contrast, mechanistic models take into account the site of digestion, the type of nutrient absorbed and the type of nutrient required for production of milk constituents, and may therefore give a better prediction of supply and requirement of nutrients. The objective of the present study is to compare the ability of three energy evaluation systems, viz. the Dutch NE system, the Agricultural and Food Research Council (AFRC) ME system, and the Feed into Milk (FIM) ME system, and of a mechanistic model based on Dijkstra et al. [Simulation of digestion in cattle fed sugar cane: prediction of nutrient supply for milk production with locally available supplements. J. Agric. Sci., Cambridge 127, 247-60] and Mills et al. [A mechanistic model of whole-tract digestion and methanogenesis in the lactating dairy cow: model development, evaluation and application. J. Anim. Sci. 79, 1584-97] to predict the feed value of grass-based diets for milk production. The dataset for evaluation consists of 41 treatments of grass-based diets (at least 0.75 g ryegrass/g diet on DM basis). For each model, the predicted energy or nutrient supply, based on observed intake, was compared with the predicted requirement based on observed performance. Assessment of the error of energy or nutrient supply relative to requirement is made by calculation of mean square prediction error (MSPE) and by concordance correlation coefficient (CCC). All energy evaluation systems predicted energy requirement to be lower (6-11%) than energy supply. The root MSPE (expressed as a proportion of the supply) was lowest for the mechanistic model (0.061), followed by the Dutch NE system (0.082), the FIM ME system (0.097) and the AFRC ME system (0.118). For the energy evaluation systems, the error due to overall bias of prediction dominated the MSPE, whereas for the mechanistic model, proportionally 0.76 of MSPE was due to random variation. CCC analysis confirmed the higher accuracy and precision of the mechanistic model compared with the energy evaluation systems. The error of prediction was positively related to grass protein content for the Dutch NE system, and was also positively related to grass DMI level for all models. In conclusion, current energy evaluation systems overestimate energy supply relative to energy requirement on grass-based diets for dairy cattle. The mechanistic model predicted glucogenic nutrients to limit performance of dairy cattle on grass-based diets, and proved to be more accurate and precise than the energy systems. The mechanistic model could be improved by allowing glucose maintenance and utilization requirement parameters to be variable.
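A minimal sketch of the two evaluation statistics used above, root MSPE as a proportion of mean supply and Lin's concordance correlation coefficient, plus the share of MSPE attributable to overall bias; the supply/requirement numbers are invented for illustration.

```python
import numpy as np

def rmspe_and_ccc(supply, requirement):
    """Root mean square prediction error (as a proportion of mean supply),
    Lin's concordance correlation coefficient, and the share of MSPE due
    to overall (mean) bias between predicted supply and requirement."""
    x = np.asarray(supply, float)
    y = np.asarray(requirement, float)
    mspe = np.mean((x - y) ** 2)
    rmspe = np.sqrt(mspe) / x.mean()
    ccc = 2 * np.cov(x, y, bias=True)[0, 1] / (
        x.var() + y.var() + (x.mean() - y.mean()) ** 2)
    bias_share = (x.mean() - y.mean()) ** 2 / mspe  # remainder: slope + random
    return rmspe, ccc, bias_share

# Illustrative ME supply vs requirement (MJ/day); numbers are invented.
supply = [210, 225, 198, 240, 215]
req = [195, 214, 190, 228, 200]
print(rmspe_and_ccc(supply, req))
```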
Abstract:
Diebold and Lamb (1997) argue that since the long-run elasticity of supply derived from the Nerlovian model entails a ratio of random variables, it is without moments. They propose minimum expected loss estimation to correct this problem but, in so doing, ignore the fact that a non-white-noise error is implicit in the model. We show that, as a consequence, the estimator is biased, and demonstrate that Bayesian estimation which fully accounts for the error structure is preferable.
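A hedged Monte Carlo illustration of the moment problem referred to above: in a Nerlovian partial-adjustment setting the long-run elasticity is a ratio of estimated coefficients, such as beta/(1 - lambda), and when the denominator estimate has mass near zero the sample mean of the ratio is erratic even though quantiles remain informative. The distributions and parameter values below are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sampling distributions for the short-run coefficient and the
# adjustment coefficient; the ratio is the implied long-run elasticity.
beta_hat = rng.normal(0.4, 0.05, size=1_000_000)
lambda_hat = rng.normal(0.9, 0.06, size=1_000_000)
elasticity = beta_hat / (1.0 - lambda_hat)

# Occasional near-zero denominators produce huge draws, so the sample mean
# is unstable (the "no moments" problem), while quantiles stay informative.
print(np.quantile(elasticity, [0.05, 0.5, 0.95]), elasticity.mean())
```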
Abstract:
The aim of the study was to establish and verify a predictive vegetation model for plant community distribution in the alti-Mediterranean zone of the Lefka Ori massif, western Crete. Based on previous work, three variables were identified as significant determinants of plant community distribution, namely altitude, slope angle and geomorphic landform. The response of four community types to these variables was tested using classification tree analysis in order to model community type occurrence. V-fold cross-validation plots were used to determine the length of the best fitting tree. The final nine-node tree selected correctly classified 92.5% of the samples. The results were used to provide decision rules for the construction of a spatial model for each community type. The model was implemented within a Geographical Information System (GIS) to predict the distribution of each community type in the study site. The evaluation of the model in the field using an error matrix gave an overall accuracy of 71%. The user's accuracy was higher for the Crepis-Cirsium (100%) and Telephium-Herniaria community types (66.7%) and relatively lower for the Peucedanum-Alyssum and Dianthus-Lomelosia community types (63.2% and 62.5%, respectively). Misclassification and field validation point to the need for improved geomorphological mapping and suggest the presence of transitional communities between existing community types.
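A short sketch of how overall accuracy and user's accuracy are read off an error matrix such as the one used for the field evaluation; the matrix entries here are invented, not the study's validation data.

```python
import numpy as np

# Hypothetical error matrix for the four community types: rows are mapped
# (predicted) classes, columns are field-observed classes.
labels = ["Crepis-Cirsium", "Telephium-Herniaria",
          "Peucedanum-Alyssum", "Dianthus-Lomelosia"]
matrix = np.array([
    [10, 0, 0, 0],
    [1, 8, 2, 1],
    [0, 2, 12, 5],
    [1, 1, 4, 10],
])

overall_accuracy = np.trace(matrix) / matrix.sum()
# User's accuracy: correctly mapped samples as a share of all samples
# mapped to that class (row-wise totals).
users_accuracy = np.diag(matrix) / matrix.sum(axis=1)
print(overall_accuracy, dict(zip(labels, users_accuracy.round(3))))
```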
Abstract:
Background: The present paper investigates the question of a suitable basic model for the number of scrapie cases in a holding, and applications of this knowledge to the estimation of scrapie-affected holding population sizes and the adequacy of control measures within holdings. Is the number of scrapie cases proportional to the size of the holding, in which case it should be incorporated into the parameter of the error distribution for the scrapie counts? Or is there a different, potentially more complex, relationship between case count and holding size, in which case the information about the size of the holding would be better incorporated as a covariate in the modelling? Methods: We show that this question can be appropriately addressed via a simple zero-truncated Poisson model in which the hypothesis of proportionality enters as a special offset model. Model comparisons can be achieved by means of likelihood ratio testing. The procedure is illustrated by means of surveillance data on classical scrapie in Great Britain. Furthermore, the model with the best fit is used to estimate the size of the scrapie-affected holding population in Great Britain by means of two capture-recapture estimators: the Poisson estimator and the generalized Zelterman estimator. Results: No evidence could be found for the hypothesis of proportionality. In fact, there is some evidence that this relationship follows a curved line which increases for small holdings up to a maximum, after which it declines again. Furthermore, it is pointed out how crucial the correct model choice is when applied to capture-recapture estimation on the basis of zero-truncated Poisson models as well as on the basis of the generalized Zelterman estimator. Estimators based on the proportionality model return very different and unreasonable estimates for the population sizes. Conclusion: Our results stress the importance of an adequate modelling approach to the association between holding size and the number of cases of classical scrapie within holdings. Reporting artefacts and speculative biological effects are hypothesized as the underlying causes of the observed curved relationship. The lack of adjustment for these artefacts might well render ineffective the current strategies for the control of the disease.
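A minimal sketch, assuming a log-linear mean, of how the proportionality hypothesis can be framed as an offset in a zero-truncated Poisson likelihood and compared with a free-slope model via a likelihood ratio test. The case counts and holding sizes are invented, and the paper's capture-recapture estimators are not reproduced.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln
from scipy.stats import chi2

def ztp_negloglik(params, counts, log_size, proportional):
    """Negative log-likelihood of a zero-truncated Poisson model with
    log(mu) = b0 + b1*log(holding size); the proportionality hypothesis
    fixes b1 = 1, i.e. log(size) enters as an offset."""
    b0 = params[0]
    b1 = 1.0 if proportional else params[1]
    mu = np.exp(b0 + b1 * log_size)
    ll = counts * np.log(mu) - mu - gammaln(counts + 1) - np.log(1 - np.exp(-mu))
    return -ll.sum()

# Illustrative positive case counts and holding sizes (invented data).
counts = np.array([1, 2, 1, 3, 1, 1, 2, 4, 1, 2])
log_size = np.log([40, 120, 60, 300, 25, 80, 150, 500, 35, 90])

fit_offset = minimize(ztp_negloglik, [0.0], args=(counts, log_size, True))
fit_full = minimize(ztp_negloglik, [0.0, 1.0], args=(counts, log_size, False))
lr = 2 * (fit_offset.fun - fit_full.fun)   # likelihood ratio statistic
print(lr, chi2.sf(lr, df=1))               # small p-value rejects proportionality
```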
Abstract:
In this paper we focus on the one-year-ahead prediction of the electricity peak-demand daily trajectory during the winter season in Central England and Wales. We define a Bayesian hierarchical model for predicting the winter trajectories and present results based on the past observed weather. Thanks to the flexibility of the Bayesian approach, we are able to produce the marginal posterior distributions of all the predictands of interest. This is fundamental progress with respect to classical methods. The results are encouraging in both skill and representation of uncertainty. Further extensions are straightforward, at least in principle. The two main extensions consist of conditioning the weather generator model on additional information, such as knowledge of the first part of the winter and/or the seasonal weather forecast.
Abstract:
Kinetic studies on the AR (aldose reductase) protein have shown that it does not behave as a classical enzyme in relation to ring aldose sugars. As with non-enzymatic glycation reactions, there is probably a free radical element involved, derived from monosaccharide autoxidation. In the case of AR, there is free radical oxidation of NADPH by autoxidizing monosaccharides, which is enhanced in the presence of the NADPH-binding protein. Thus any assay for AR based on the oxidation of NADPH in the presence of autoxidizing monosaccharides is invalid, and tissue AR measurements based on this method are also invalid and should be reassessed. AR exhibits broad specificity for both hydrophilic and hydrophobic aldehydes, which suggests that the protein may be involved in detoxification. The last thing we would want to do is to inhibit it. ARIs (AR inhibitors) have a number of actions in the cell which are not specific, and which do not involve them binding to AR. These include peroxy-radical scavenging and effects of metal ion chelation. The AR/ARI story emphasizes the importance of correct experimental design in all biocatalytic experiments. Developing the use of Bayesian utility functions, we have used a systematic method to identify the optimum experimental designs for a number of kinetic model data sets. This has led to the identification of trends between kinetic model types, sets of design rules and the key conclusion that such designs should be based on some prior knowledge of K_M and/or the kinetic model. We suggest an optimal and iterative method for selecting features of the design such as the substrate range, number of measurements and choice of intermediate points. The final design collects data suitable for accurate modelling and analysis, minimizes the error in the parameters estimated, and is suitable for simple or complex steady-state models.
Abstract:
Purpose: Acquiring details of kinetic parameters of enzymes is crucial to biochemical understanding, drug development, and clinical diagnosis in ocular diseases. The correct design of an experiment is critical to collecting data suitable for analysis, modelling and deriving the correct information. As classical design methods are not targeted to the more complex kinetics being frequently studied, attention is needed to estimate parameters of such models with low variance. Methods: We have developed Bayesian utility functions to minimise kinetic parameter variance, involving differentiation of model expressions and matrix inversion. These have been applied to the simple kinetics of the enzymes in the glyoxalase pathway (of importance in posttranslational modification of proteins in cataract), and the complex kinetics of lens aldehyde dehydrogenase (also of relevance to cataract). Results: Our successful application of Bayesian statistics has allowed us to identify a set of rules for designing optimum kinetic experiments iteratively. Most importantly, the distribution of points in the range is critical; it is not simply a matter of even or multiple increases. At least 60% must be below the K_M (or below each dissociation constant, if there is more than one) and 40% above. This choice halves the variance found using a simple even spread across the range. With both the glyoxalase system and lens aldehyde dehydrogenase we have significantly improved the variance of kinetic parameter estimation while reducing the number and costs of experiments. Conclusions: We have developed an optimal and iterative method for selecting features of design such as substrate range, number of measurements and choice of intermediate points. Our novel approach minimises parameter error and costs, and maximises experimental efficiency. It is applicable to many areas of ocular drug design, including receptor-ligand binding and immunoglobulin binding, and should be an important tool in ocular drug discovery.
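A tiny sketch of the reported allocation rule, placing roughly 60% of substrate points below a prior K_M guess and 40% above; the function name, ranges and K_M value are illustrative assumptions only.

```python
import numpy as np

def substrate_design(km, n_points, smin, smax, frac_below=0.6):
    """Place kinetic design points so that about 60% of substrate
    concentrations fall below the prior K_M guess and 40% above, the
    allocation reported to roughly halve parameter variance versus an
    even spread across the range."""
    n_low = int(round(frac_below * n_points))
    low = np.linspace(smin, km, n_low, endpoint=False)
    high = np.linspace(km, smax, n_points - n_low)
    return np.concatenate([low, high])

# Illustrative design: prior K_M guess of 2.5 mM, 10 points from 0.1 to 20 mM.
print(substrate_design(km=2.5, n_points=10, smin=0.1, smax=20.0))
```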
Abstract:
In areas such as drug development, clinical diagnosis and biotechnology research, acquiring details about the kinetic parameters of enzymes is crucial. The correct design of an experiment is critical to collecting data suitable for analysis, modelling and deriving the correct information. As classical design methods are not targeted to the more complex kinetics being frequently studied, attention is needed to estimate parameters of such models with low variance. We demonstrate that a Bayesian approach (the use of prior knowledge) can produce major gains quantifiable in terms of information, productivity and accuracy of each experiment. Developing the use of Bayesian utility functions, we have used a systematic method to identify the optimum experimental designs for a number of kinetic model data sets. This has enabled the identification of trends between kinetic model types, sets of design rules and the key conclusion that such designs should be based on some prior knowledge of K_M and/or the kinetic model. We suggest an optimal and iterative method for selecting features of the design such as the substrate range, number of measurements and choice of intermediate points. The final design collects data suitable for accurate modelling and analysis and minimises the error in the parameters estimated.
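As a hedged sketch of how prior knowledge of K_M can be used to rank candidate designs, the snippet below scores a Michaelis-Menten design by the log-determinant of its locally evaluated Fisher information, a standard D-optimality style utility; the paper's actual Bayesian utility functions are not reproduced here, and all parameter values are invented.

```python
import numpy as np

def mm_design_utility(substrates, vmax, km, sigma=1.0):
    """D-optimality style utility for a Michaelis-Menten experiment: the
    log-determinant of the Fisher information for (Vmax, K_M) at a
    candidate set of substrate concentrations, given prior parameter
    guesses. Larger values imply lower expected parameter variance."""
    s = np.asarray(substrates, float)
    dv = s / (km + s)                  # derivative of rate w.r.t. Vmax
    dk = -vmax * s / (km + s) ** 2     # derivative of rate w.r.t. K_M
    J = np.column_stack([dv, dk])      # sensitivity (Jacobian) matrix
    info = J.T @ J / sigma ** 2        # Fisher information under iid Gaussian error
    return np.linalg.slogdet(info)[1]

# Compare an even spread with a design concentrated below the prior K_M guess.
even = np.linspace(0.1, 20.0, 10)
skewed = np.concatenate([np.linspace(0.1, 2.5, 6), np.linspace(2.5, 20.0, 4)])
print(mm_design_utility(even, vmax=10.0, km=2.5),
      mm_design_utility(skewed, vmax=10.0, km=2.5))
```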
Abstract:
A model in the existing literature for comparing the inventory costs of purchasing under the economic order quantity (EOQ) system and the just-in-time (JIT) order purchasing system concluded that JIT purchasing was virtually always the preferable inventory ordering system, especially at high levels of annual demand. By expanding the classical EOQ model, this paper shows that it is possible for the EOQ system to be more cost effective than the JIT system once the inventory demand approaches the EOQ-JIT cost indifference point. The case study conducted in the ready-mixed concrete industry in Singapore supported this proposition.
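A textbook-style sketch of a cost-indifference calculation between EOQ and JIT purchasing, assuming the only JIT penalty is a higher unit price; this is a simplification for illustration, not the paper's expanded model, and all figures are invented.

```python
import math

def eoq_annual_cost(demand, order_cost, holding_cost, unit_price):
    """Annual cost under EOQ: purchase cost plus the optimal combined
    ordering and holding cost sqrt(2*D*S*H)."""
    return demand * unit_price + math.sqrt(2 * demand * order_cost * holding_cost)

def jit_annual_cost(demand, jit_unit_price):
    """Annual cost under JIT purchasing: a (typically higher) unit price,
    with ordering and holding costs assumed negligible."""
    return demand * jit_unit_price

def indifference_demand(order_cost, holding_cost, unit_price, jit_unit_price):
    """Demand at which the two systems cost the same:
    D*(p_jit - p_eoq) = sqrt(2*D*S*H)  =>  D = 2*S*H / (p_jit - p_eoq)^2."""
    return 2 * order_cost * holding_cost / (jit_unit_price - unit_price) ** 2

# Illustrative figures (invented): S = $60/order, H = $4/unit/yr,
# EOQ unit price $100, JIT unit price $100.80.
print(indifference_demand(60, 4, 100.0, 100.8))
```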
Abstract:
Individuals with elevated levels of plasma low density lipoprotein (LDL) cholesterol (LDL-C) are considered to be at risk of developing coronary heart disease. LDL particles are removed from the blood by a process known as receptor-mediated endocytosis, which occurs mainly in the liver. A series of classical experiments delineated the major steps in the endocytotic process: apolipoprotein B-100 present on LDL particles binds to a specific receptor (LDL receptor, LDL-R) in specialized areas of the cell surface called clathrin-coated pits. The pit comprising the LDL-LDL-R complex is internalized, forming a cytoplasmic endosome. Fusion of the endosome with a lysosome leads to degradation of the LDL into its constituent parts (that is, cholesterol, fatty acids, and amino acids), which are released for reuse by the cell, or are excreted. In this paper, we formulate a mathematical model of LDL endocytosis, consisting of a system of ordinary differential equations. We validate our model against existing in vitro experimental data, and we use it to explore differences in system behavior when a single bolus of extracellular LDL is supplied to cells, compared to when a continuous supply of LDL particles is available. Whereas the former situation is common to in vitro experimental systems, the latter better reflects the in vivo situation. We use asymptotic analysis and numerical simulations to study the long-time behavior of model solutions. The implications of model-derived insights for experimental design are discussed.
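A toy compartment model in the spirit of the ODE system described above, contrasting a single extracellular bolus with a continuous supply of LDL; the pools, rate constants and function names are illustrative assumptions, not the paper's actual equations.

```python
import numpy as np
from scipy.integrate import solve_ivp

def ldl_model(t, y, k_bind, k_int, k_deg, supply_rate):
    """Toy three-pool model of LDL endocytosis: extracellular LDL binds
    surface receptors, bound LDL is internalised, and internalised LDL is
    degraded in lysosomes. A constant supply_rate mimics the in vivo
    continuous supply; supply_rate = 0 corresponds to a single bolus."""
    extracellular, bound, internal = y
    d_ext = supply_rate - k_bind * extracellular
    d_bound = k_bind * extracellular - k_int * bound
    d_int = k_int * bound - k_deg * internal
    return [d_ext, d_bound, d_int]

params = dict(k_bind=0.1, k_int=0.2, k_deg=0.05)   # illustrative rate constants
y0 = [1.0, 0.0, 0.0]                               # bolus of extracellular LDL

bolus = solve_ivp(ldl_model, (0, 100), y0, args=(*params.values(), 0.0))
continuous = solve_ivp(ldl_model, (0, 100), y0, args=(*params.values(), 0.02))
# Final pool sizes differ markedly between bolus and continuous supply.
print(bolus.y[:, -1], continuous.y[:, -1])
```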