28 results for In-sample
In vitro cumulative gas production techniques: History, methodological considerations and challenges
Abstract:
Methodology used to measure in vitro gas production is reviewed to determine impacts of sources of variation on resultant gas production profiles (GPP). Current methods include measurement of gas production at constant pressure (e.g., use of gas tight syringes), a system that is inexpensive, but may be less sensitive than others thereby affecting its suitability in some situations. Automated systems that measure gas production at constant volume allow pressure to accumulate in the bottle, which is recorded at different times to produce a GPP, and may result in sufficiently high pressure that solubility of evolved gases in the medium is affected, thereby resulting in a recorded volume of gas that is lower than that predicted from stoichiometric calculations. Several other methods measure gas production at constant pressure and volume with either pressure transducers or sensors, and these may be manual, semi-automated or fully automated in operation. In these systems, gas is released as pressure increases, and vented gas is recorded. Agitating the medium does not consistently produce more gas with automated systems, and little or no effect of agitation was observed with manual systems. The apparatus affects GPP, but mathematical manipulation may enable effects of apparatus to be removed. The amount of substrate affects the volume of gas produced, but not rate of gas production, provided there is sufficient buffering capacity in the medium. Systems that use a very small amount of substrate are prone to experimental error in sample weighing. Effect of sample preparation on GPP has been found to be important, but further research is required to determine the optimum preparation that mimics animal chewing. 
Inoculum is the single largest source of variation in measuring GPP, as rumen fluid is variable and sampling schedules, diets fed to donor animals and ratios of rumen fluid/medium must be selected such that microbial activity is sufficiently high that it does not affect rate and extent of fermentation. Species of donor animal may also cause differences in GPP. End point measures can be mathematically manipulated to account for species differences, but rates of fermentation are not related. Other sources of inocula that have been used include caecal fluid (primarily for investigating hindgut fermentation in monogastrics), effluent from simulated rumen fermentation (e.g., 'Rusitec', which was as variable as rumen fluid), faeces, and frozen or freeze-dried rumen fluid (which were both less active than fresh rumen fluid). Use of mixtures of cell-free enzymes, or pure cultures of bacteria, may be a way of increasing GPP reproducibility, while reducing reliance on surgically modified animals. However, more research is required to develop these inocula. A number of media have been developed which buffer the incubation and provide relevant micro-nutrients to the microorganisms. To date, little research has been completed on relationships between the composition of the medium and measured GPP. However, comparing GPP from media either rich in N or N-free allows assessment of the contributions of N-containing compounds in the sample. (c) 2005 Published by Elsevier B.V.
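The profile comparisons discussed in this abstract are typically made after fitting a parametric curve to each GPP. A minimal sketch, assuming the widely used single-pool exponential model p(t) = a + b(1 − e^(−ct)); the parameter values and grid are purely illustrative, not taken from the review:

```python
import math

def gpp(t, a, b, c):
    """Single-pool exponential gas production profile:
    cumulative gas (mL) at time t (h)."""
    return a + b * (1.0 - math.exp(-c * t))

# Synthetic 'observed' profile generated from known parameters.
times = [0, 2, 4, 8, 12, 24, 48, 72]
observed = [gpp(t, a=2.0, b=60.0, c=0.08) for t in times]

# Recover the fractional rate c by a coarse grid search (a and b held fixed).
best_c = min(
    (sum((gpp(t, 2.0, 60.0, c) - y) ** 2 for t, y in zip(times, observed)), c)
    for c in (i / 1000 for i in range(1, 301))
)[1]
```

In practice all three parameters would be estimated jointly by nonlinear least squares, and the fitted curves (rather than raw readings) compared across apparatus.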
Abstract:
The project investigated whether it would be possible to remove the main technical hindrance to precision application of herbicides to arable crops in the UK, namely creating geo-referenced weed maps for each field. The ultimate goal is an information system so that agronomists and farmers can plan precision weed control and create spraying maps. The project focussed on black-grass in wheat, but research was also carried out on barley and beans and on wild-oats, barren brome, rye-grass, cleavers and thistles, which form stable patches in arable fields. Farmers may also make special efforts to control them. Using cameras mounted on farm machinery, the project explored the feasibility of automating the process of mapping black-grass in fields. Geo-referenced images were captured from June to December 2009, using sprayers, a tractor, combine harvesters and on foot. Cameras were mounted on the sprayer boom, on windows or on top of tractor and combine cabs, and images were captured with a range of vibration levels and at speeds up to 20 km h⁻¹. For acceptability to farmers, it was important that every image containing black-grass was classified as containing black-grass; false negatives are highly undesirable. The software algorithms recorded no false negatives in sample images analysed to date, although some black-grass heads were unclassified and there were also false positives. The density of black-grass heads per unit area estimated by machine vision increased as a linear function of the actual density, with a mean detection rate of 47% of black-grass heads in sample images at T3 within a density range of 13 to 1230 heads m⁻². A final part of the project was to create geo-referenced weed maps using software written in previous HGCA-funded projects, and two examples show that geo-location by machine vision compares well with manually-mapped weed patches.
The consortium therefore demonstrated for the first time the feasibility of using a GPS-linked computer-controlled camera system mounted on farm machinery (tractor, sprayer or combine) to geo-reference black-grass in winter wheat between black-grass head emergence and seed shedding.
Abstract:
Accumulating data suggest that diets rich in flavanols and procyanidins are beneficial for human health. In this context, there has been a great interest in elucidating the systemic levels and metabolic profiles at which these compounds occur in humans. While recent progress has been made, there still exist considerable differences and various disagreements with regard to the mammalian metabolites of these compounds, which in turn is largely a consequence of the lack of availability of authentic standards that would allow for the directed development and validation of expedient analytical methodologies. In the present study, we developed a method for the analysis of structurally-related flavanol metabolites using a wide range of authentic standards. Applying this method in the context of a human dietary intervention study using comprehensively characterized and standardized flavanol- and procyanidin-containing cocoa, we were able to identify the structurally-related (−)-epicatechin metabolites (SREM) postprandially extant in the systemic circulation of humans. Our results demonstrate that (−)-epicatechin-3′-β-D-glucuronide, (−)-epicatechin-3′-sulfate, and a 3′-O-methyl(−)-epicatechin-5/7-sulfate are the predominant SREM in humans, and further confirm the relevance of the stereochemical configuration in the context of flavanol metabolism. In addition, we also identified plausible causes for the previously reported discrepancies regarding flavanol metabolism, consisting to a significant extent of inter-laboratory differences in sample preparation (enzymatic treatment and sample conditioning for HPLC analysis) and detection systems. Thus, these findings may also aid in the establishment of consensus on this topic.
Abstract:
Simulation models are widely employed to make probability forecasts of future conditions on seasonal to annual lead times. Added value in such forecasts is reflected in the information they add, either to purely empirical statistical models or to simpler simulation models. An evaluation of seasonal probability forecasts from the Development of a European Multimodel Ensemble system for seasonal to inTERannual prediction (DEMETER) and ENSEMBLES multi-model ensemble experiments is presented. Two particular regions are considered: Nino3.4 in the Pacific and the Main Development Region in the Atlantic; these regions were chosen before any spatial distribution of skill was examined. The ENSEMBLES models are found to have skill against the climatological distribution on seasonal time-scales. For models in ENSEMBLES that have a clearly defined predecessor model in DEMETER, the improvement from DEMETER to ENSEMBLES is discussed. Due to the long lead times of the forecasts and the evolution of observation technology, the forecast-outcome archive for seasonal forecast evaluation is small; arguably, evaluation data for seasonal forecasting will always be precious. Issues of information contamination from in-sample evaluation are discussed and impacts (both positive and negative) of variations in cross-validation protocol are demonstrated. Other difficulties due to the small forecast-outcome archive are identified. The claim that the multi-model ensemble provides a ‘better’ probability forecast than the best single model is examined and challenged. Significant forecast information beyond the climatological distribution is also demonstrated in a persistence probability forecast. The ENSEMBLES probability forecasts add significantly more information to empirical probability forecasts on seasonal time-scales than on decadal scales. Current operational forecasts might be enhanced by melding information from both simulation models and empirical models. 
Simulation models based on physical principles are sometimes expected, in principle, to outperform empirical models; direct comparison of their forecast skill provides information on progress toward that goal.
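Skill against the climatological distribution, as evaluated in this abstract, is often measured with the ignorance (logarithmic) score. A minimal sketch with invented probabilities (not DEMETER or ENSEMBLES output):

```python
import math

def ignorance(p):
    """Ignorance (negative log2) score of the probability the forecast
    assigned to the outcome that occurred; lower is better."""
    return -math.log2(p)

# Probabilities each system assigned to the observed category (invented).
forecast_probs    = [0.6, 0.5, 0.7, 0.4]   # ensemble-based forecast
climatology_probs = [1 / 3] * 4            # three equally likely categories

skill = (sum(ignorance(p) for p in climatology_probs)
         - sum(ignorance(p) for p in forecast_probs)) / len(forecast_probs)
# skill > 0 bits per forecast: information added beyond climatology.
```

With a small forecast-outcome archive, the sampling uncertainty of such a skill estimate is large, which is precisely the evaluation difficulty the abstract raises.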
Abstract:
This paper proposes and tests a new framework for weighting recursive out-of-sample prediction errors according to their corresponding levels of in-sample estimation uncertainty. In essence, we show how to use the maximum possible amount of information from the sample in the evaluation of the prediction accuracy, by commencing the forecasts at the earliest opportunity and weighting the prediction errors. Via a Monte Carlo study, we demonstrate that the proposed framework selects the correct model from a set of candidate models considerably more often than the existing standard approach when only a small sample is available. We also show that the proposed weighting approaches result in tests of equal predictive accuracy that have much better sizes than the standard approach. An application to an exchange rate dataset highlights relevant differences in the results of tests of predictive accuracy based on the standard approach versus the framework proposed in this paper.
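The weighting idea can be illustrated schematically. The sketch below starts recursive one-step forecasts at the earliest opportunity and down-weights early errors, using the sample size at each origin as a stand-in for inverse in-sample estimation variance; the paper's actual weighting scheme differs:

```python
import random
import statistics

random.seed(42)
data = [random.gauss(10.0, 2.0) for _ in range(40)]

weighted_num = weight_sum = 0.0
for t in range(1, len(data)):                # forecast from the earliest opportunity
    forecast = statistics.fmean(data[:t])    # recursively re-estimated mean
    weight = t   # proxy for inverse estimation variance (Var(mean) = sigma^2 / t)
    weighted_num += weight * (data[t] - forecast) ** 2
    weight_sum += weight

weighted_mse = weighted_num / weight_sum     # down-weights error-prone early forecasts
```

The early forecasts rest on very few observations, so their errors reflect estimation noise as much as model adequacy; weighting lets them contribute without dominating the comparison.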
Abstract:
Model-based estimates of future uncertainty are generally based on the in-sample fit of the model, as when Box-Jenkins prediction intervals are calculated. However, this approach will generate biased uncertainty estimates in real time when there are data revisions. A simple remedy is suggested, and used to generate more accurate prediction intervals for 25 macroeconomic variables, in line with the theory. A simulation study based on an empirically-estimated model of data revisions for US output growth is used to investigate small-sample properties.
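An interval built from the in-sample fit, the baseline this abstract starts from, looks like the following sketch (numbers invented; the paper's point is that real-time data revisions make true uncertainty larger than such intervals suggest):

```python
import statistics

# In-sample one-step-ahead residuals from a fitted model (invented numbers).
residuals = [0.3, -0.5, 0.1, 0.7, -0.2, -0.4, 0.6, -0.1]
point_forecast = 2.4

sigma = statistics.pstdev(residuals)   # in-sample residual standard deviation
z = 1.96                               # ~95% coverage under normal errors
interval = (point_forecast - z * sigma, point_forecast + z * sigma)
```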
Abstract:
The jackknife method is often used for variance estimation in sample surveys but has only been developed for a limited class of sampling designs. We propose a jackknife variance estimator which is defined for any without-replacement unequal probability sampling design. We demonstrate design consistency of this estimator for a broad class of point estimators. A Monte Carlo study shows how the proposed estimator may improve on existing estimators.
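For the classical equal-probability case, the delete-one jackknife that this abstract generalizes can be sketched as follows (the proposed unequal-probability estimator itself is not reproduced here; data invented):

```python
import statistics

sample = [4.2, 5.1, 3.8, 6.0, 5.5, 4.9, 5.2, 4.4]
n = len(sample)
theta_hat = statistics.fmean(sample)

# Delete-one recomputations of the point estimator.
leave_one_out = [statistics.fmean(sample[:i] + sample[i + 1:]) for i in range(n)]

# Classical jackknife variance estimator.
jack_var = (n - 1) / n * sum((lo - theta_hat) ** 2 for lo in leave_one_out)
```

For the sample mean this reproduces the textbook estimate s²/n exactly; the value of the jackknife lies in applying the same recipe to estimators with no closed-form variance.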
Abstract:
Imputation is commonly used to compensate for item non-response in sample surveys. If we treat the imputed values as if they are true values, and then compute the variance estimates by using standard methods, such as the jackknife, we can seriously underestimate the true variances. We propose a modified jackknife variance estimator which is defined for any without-replacement unequal probability sampling design in the presence of imputation and non-negligible sampling fraction. Mean, ratio and random-imputation methods will be considered. The practical advantage of the method proposed is its breadth of applicability.
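Why treating imputed values as true values underestimates variance can be seen in a toy example (the proposed modified jackknife correction is not sketched here):

```python
import statistics

# Respondent values; two items are missing and mean-imputed.
respondents = [3.1, 4.7, 2.8, 5.2, 4.0, 3.6]
imputed_value = statistics.fmean(respondents)
completed = respondents + [imputed_value, imputed_value]

# Treating imputed values as real observations shrinks the variance estimate:
naive_var = statistics.variance(completed)
respondent_var = statistics.variance(respondents)
```

The imputed points sit exactly at the mean, contributing nothing to the sum of squares while inflating the divisor, so the naive estimate is biased downward.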
Abstract:
There is widespread evidence that the volatility of stock returns displays an asymmetric response to good and bad news. This article considers the impact of asymmetry on time-varying hedges for financial futures. An asymmetric model that allows forecasts of cash and futures return volatility to respond differently to positive and negative return innovations gives superior in-sample hedging performance. However, the simpler symmetric model is not inferior in a hold-out sample. A method for evaluating the models in a modern risk-management framework is presented, highlighting the importance of allowing optimal hedge ratios to be both time-varying and asymmetric.
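The time-varying hedge ratio that the asymmetric and symmetric models forecast reduces, in its simplest rolling-window form, to Cov(cash, futures)/Var(futures). An illustrative sketch (returns invented; no GARCH dynamics):

```python
def cov(xs, ys):
    """Sample covariance."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)

def hedge_ratio(cash, fut):
    """Minimum-variance hedge ratio: Cov(cash, futures) / Var(futures)."""
    return cov(cash, fut) / cov(fut, fut)

cash_returns    = [0.010, -0.004, 0.006, -0.012, 0.008, 0.003, -0.007, 0.005]
futures_returns = [0.012, -0.005, 0.007, -0.014, 0.009, 0.004, -0.008, 0.006]

window = 5   # rolling estimation window (illustrative)
ratios = [hedge_ratio(cash_returns[t - window:t], futures_returns[t - window:t])
          for t in range(window, len(cash_returns) + 1)]
```

The asymmetric models in the study replace the rolling moments with conditional ones that respond differently to positive and negative return innovations.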
Abstract:
This paper investigates the frequency of extreme events for three LIFFE futures contracts for the calculation of minimum capital risk requirements (MCRRs). We propose a semiparametric approach where the tails are modelled by the Generalized Pareto Distribution and smaller risks are captured by the empirical distribution function. We compare the capital requirements from this approach with those calculated from the unconditional density and from a conditional density - a GARCH(1,1) model. Our primary finding is that, both in-sample and for a hold-out sample, our extreme value approach yields results superior to either of the other two models, which do not explicitly model the tails of the return distribution. Since the use of these internal models will be permitted under the EC-CAD II, they could be widely adopted in the near future for determining capital adequacies. Hence, close scrutiny of competing models is required to avoid a potentially costly misallocation of capital resources while at the same time ensuring the safety of the financial system.
Abstract:
The use of MPT in the construction of real estate portfolios has two serious limitations when used in an ex-ante framework: (1) the intertemporal instability of the portfolio weights and (2) the sharp deterioration in performance of the optimal portfolios outside the sample period used to estimate asset mean returns. Both problems can be traced to wide fluctuations in sample means (Jorion, 1985). Thus the use of a procedure that ignores the estimation risk due to the uncertainty in mean returns is likely to produce sub-optimal results in subsequent periods. This suggests that consideration of the issue of estimation risk is crucial in the use of MPT in developing a successful real estate portfolio strategy. Therefore, following Eun & Resnick (1988), this study extends previous ex-ante based studies by evaluating optimal portfolio allocations in subsequent test periods, using methods that have been proposed to reduce the effect of measurement error on optimal portfolio allocations.
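Estimation-risk reductions of the kind Jorion (1985) proposes amount to shrinking noisy sample means toward a common target. A minimal sketch with a fixed, illustrative shrinkage weight (Bayes-Stein derives the weight from the data; all numbers are invented):

```python
asset_means = [0.012, 0.004, 0.009, -0.002, 0.015]   # sample mean returns
grand_mean = sum(asset_means) / len(asset_means)

w = 0.4   # illustrative shrinkage intensity; Bayes-Stein derives it from the data
shrunk = [(1 - w) * mu + w * grand_mean for mu in asset_means]
# Extreme sample means are pulled toward the cross-sectional average,
# reducing the impact of estimation error on the optimizer's inputs.
```

Because the optimizer chases extreme inputs, damping those extremes stabilizes the resulting portfolio weights across periods.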
Abstract:
The idea of incorporating multiple models of linear rheology into a superensemble, to forge a consensus forecast from the individual model predictions, is investigated. The relative importance of the individual models in the so-called multimodel superensemble (MMSE) was inferred by evaluating their performance on a set of experimental training data, via nonlinear regression. The predictive ability of the MMSE model was tested by comparing its predictions on test data that were similar (in-sample) and dissimilar (out-of-sample) to the training data used in the calibration. For the in-sample forecasts, we found that the MMSE model easily outperformed the best constituent model. The presence of good individual models greatly enhanced the MMSE forecast, while the presence of some bad models in the superensemble also improved the MMSE forecast modestly. While the performance of the MMSE model on the out-of-sample test data was not as spectacular, it demonstrated the robustness of this approach.
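The superensemble combination can be sketched for the linear, two-model case; the study itself uses nonlinear regression over more constituents, so this is only schematic (all numbers invented):

```python
# Training observations and two constituent-model predictions (illustrative).
obs     = [1.0, 2.0, 3.0, 4.0, 5.0]
model_a = [1.2, 1.9, 3.3, 3.8, 5.4]
model_b = [0.5, 2.5, 2.6, 4.6, 4.4]

# Least-squares combination weights (no intercept): solve the 2x2 normal equations.
saa = sum(a * a for a in model_a)
sbb = sum(b * b for b in model_b)
sab = sum(a * b for a, b in zip(model_a, model_b))
say = sum(a * y for a, y in zip(model_a, obs))
sby = sum(b * y for b, y in zip(model_b, obs))

det = saa * sbb - sab * sab
wa = (say * sbb - sab * sby) / det
wb = (saa * sby - sab * say) / det

superensemble = [wa * a + wb * b for a, b in zip(model_a, model_b)]
```

On the training data the combination can never fit worse than either constituent alone, since each single model is a special case of the weighted sum; the out-of-sample behaviour is where the interesting comparison lies.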
Abstract:
We consider forecasting with factors, variables and both, modeling in-sample using Autometrics so all principal components and variables can be included jointly, while tackling multiple breaks by impulse-indicator saturation. A forecast-error taxonomy for factor models highlights the impacts of location shifts on forecast-error biases. Forecasting US GDP over 1-, 4- and 8-step horizons using the dataset from Stock and Watson (2009) updated to 2011:2 shows factor models are more useful for nowcasting or short-term forecasting, but their relative performance declines as the forecast horizon increases. Forecasts for GDP levels highlight the need for robust strategies, such as intercept corrections or differencing, when location shifts occur as in the recent financial crisis.
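The intercept-correction strategy mentioned for location shifts amounts to adding the most recent observed forecast error back onto the model forecast. A toy sketch (series invented, using a simple mean forecast rather than a factor model):

```python
# A location shift occurs at index 6; the model was fitted pre-shift.
series = [2.0, 2.1, 1.9, 2.0, 2.2, 1.8, 5.0, 5.1, 4.9]

mean_forecast = sum(series[:6]) / 6          # pre-shift model forecast
last_error = series[6] - mean_forecast       # most recent observed error

naive_forecast = mean_forecast                    # ignores the shift
corrected_forecast = mean_forecast + last_error   # intercept correction

target = series[7]
```

The correction trades some extra noise for robustness: after a genuine shift it removes the systematic bias the naive forecast carries forward.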
Abstract:
This paper characterizes the dynamics of jumps and analyzes their importance for volatility forecasting. Using high-frequency data on four prominent energy markets, we perform a model-free decomposition of realized variance into its continuous and discontinuous components. We find strong evidence of jumps in energy markets between 2007 and 2012. We then investigate the importance of jumps for volatility forecasting. To this end, we estimate and analyze the predictive ability of several Heterogeneous Autoregressive (HAR) models that explicitly capture the dynamics of jumps. Conducting extensive in-sample and out-of-sample analyses, we establish that explicitly modeling jumps does not significantly improve forecast accuracy. Our results are broadly consistent across our four energy markets, forecasting horizons, and loss functions.
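The HAR structure underlying these models regresses next-day realized variance on daily, weekly and monthly RV averages; the jump-augmented variants add the jump component as a further regressor. A sketch of regressor construction on synthetic data (the OLS fit itself is omitted):

```python
# Synthetic daily realized-variance series (illustrative values).
rv = [0.8 + 0.1 * ((i * 7) % 5) for i in range(60)]

def har_row(t):
    """HAR regressors at day t: daily RV, weekly (5-day) and monthly (22-day) means."""
    rv_d = rv[t]
    rv_w = sum(rv[t - 4:t + 1]) / 5
    rv_m = sum(rv[t - 21:t + 1]) / 22
    return rv_d, rv_w, rv_m

# Each row predicts next-day RV; rows need 22 days of history.
rows    = [har_row(t) for t in range(21, len(rv) - 1)]
targets = [rv[t + 1] for t in range(21, len(rv) - 1)]
```

The three horizons let slow-moving and fast-moving volatility components enter with separate coefficients, which is what gives HAR its long-memory-like fits from a simple regression.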