18 results for Variance Models
in CentAUR: Central Archive at the University of Reading - UK
Abstract:
We consider the finite sample properties of model selection by information criteria in conditionally heteroscedastic models. Recent theoretical results show that certain popular criteria are consistent in that they will select the true model asymptotically with probability 1. To examine the empirical relevance of this property, Monte Carlo simulations are conducted for a set of non-nested data generating processes (DGPs) with the set of candidate models consisting of all types of model used as DGPs. In addition, not only is the best model considered but also those with similar values of the information criterion, called close competitors, thus forming a portfolio of eligible models. To supplement the simulations, the criteria are applied to a set of economic and financial series. In the simulations, the criteria are largely ineffective at identifying the correct model, either as best or a close competitor, the parsimonious GARCH(1, 1) model being preferred for most DGPs. In contrast, asymmetric models are generally selected to represent actual data. This leads to the conjecture that the properties of parameterizations of processes commonly used to model heteroscedastic data are more similar than may be imagined and that more attention needs to be paid to the behaviour of the standardized disturbances of such models, both in simulation exercises and in empirical modelling.
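For reference, the criteria discussed above are of the standard penalized-likelihood form; the paper's exact variants are not given in the abstract, so the following is only a reminder of the usual definitions:

AIC = -2\ln L(\hat{\theta}) + 2k, \qquad BIC = -2\ln L(\hat{\theta}) + k\ln T, \qquad HQ = -2\ln L(\hat{\theta}) + 2k\ln\ln T,

where L(\hat{\theta}) is the maximized likelihood, k the number of estimated parameters and T the sample size. A criterion is consistent when the probability of selecting the true DGP from the candidate set tends to 1 as T \to \infty.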
Abstract:
This paper uses appropriately modified information criteria to select models from the GARCH family, which are subsequently used for predicting US dollar exchange rate return volatility. The out-of-sample forecast accuracy of models chosen in this manner compares favourably on mean absolute error grounds, although less favourably on mean squared error grounds, with those generated by the commonly used GARCH(1, 1) model. An examination of the orders of models selected by the criteria reveals that (1, 1) models are typically selected less than 20% of the time.
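The abstract does not spell out the loss functions or the volatility proxy used; a generic statement of the two accuracy measures it refers to, for variance forecasts \hat{\sigma}^2_t and a realized proxy \tilde{\sigma}^2_t over n out-of-sample periods, would be

MAE = \frac{1}{n}\sum_{t=1}^{n} |\hat{\sigma}^2_t - \tilde{\sigma}^2_t|, \qquad MSE = \frac{1}{n}\sum_{t=1}^{n} (\hat{\sigma}^2_t - \tilde{\sigma}^2_t)^2.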
Phosphorus dynamics and export in streams draining micro-catchments: Development of empirical models
Abstract:
Annual total phosphorus (TP) export data from 108 European micro-catchments were analyzed against descriptive catchment data on climate (runoff), soil types, catchment size, and land use. The best empirical model that could be developed included runoff, proportion of agricultural land and catchment size as explanatory variables, but it explained only a modest share of the variance in the dataset (R² = 0.37). Improved country-specific empirical models could be developed in some cases. The best example was from Norway, where an analysis of TP-export data from 12 predominantly agricultural micro-catchments revealed a relationship explaining 96% of the variance in TP-export. The explanatory variables were in this case soil-P status (P-AL), proportion of organic soil, and the export of suspended sediment. Another example is from Denmark, where an empirical model was established for the basic annual average TP-export from 24 catchments, with percentage sandy soils, percentage organic soils, runoff, and application of phosphorus in fertilizer and animal manure as explanatory variables (R² = 0.97).
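The abstract does not report the functional form of the fitted models; a regression of the generic kind described, using the stated explanatory variables (coefficients \beta_i purely illustrative), would be

TP_{export} = \beta_0 + \beta_1\,(\text{runoff}) + \beta_2\,(\%\text{ agricultural land}) + \beta_3\,(\text{catchment size}) + \varepsilon,

with goodness of fit summarized by R^2 (0.37 for the all-catchment model, 0.96 and 0.97 for the Norwegian and Danish country-specific models).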
Abstract:
Combinations of drugs are increasingly being used for a wide variety of diseases and conditions. A pre-clinical study may allow the investigation of the response at a large number of dose combinations. In determining the response to a drug combination, interest may lie in seeking evidence of synergism, in which the joint action is greater than the actions of the individual drugs, or of antagonism, in which it is less. Two well-known response surface models representing no interaction are Loewe additivity and Bliss independence, and Loewe or Bliss synergism or antagonism is defined relative to these. We illustrate an approach to fitting these models for the case in which the marginal single drug dose-response relationships are represented by four-parameter logistic curves with common upper and lower limits, and where the response variable is normally distributed with a common variance about the dose-response curve. When the dose-response curves are not parallel, the relative potency of the two drugs varies according to the magnitude of the desired effect and the models for Loewe additivity and synergism/antagonism cannot be explicitly expressed. We present an iterative approach to fitting these models without the assumption of parallel dose-response curves. A goodness-of-fit test based on residuals is also described. Implementation using the SAS NLIN procedure is illustrated using data from a pre-clinical study. Copyright © 2007 John Wiley & Sons, Ltd.
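For context, the two no-interaction reference models mentioned above have standard forms. Loewe additivity for doses d_1, d_2 jointly producing effect y is

\frac{d_1}{D_{y,1}} + \frac{d_2}{D_{y,2}} = 1,

where D_{y,i} is the dose of drug i alone producing effect y (an interaction index below 1 indicating Loewe synergism, above 1 antagonism), while Bliss independence for fractional effects E_1, E_2 \in [0,1] is E_{12} = E_1 + E_2 - E_1 E_2. A common four-parameter logistic form for each marginal curve is E(d) = E_{min} + (E_{max} - E_{min}) / \{1 + (d/ED_{50})^{-s}\}; the paper's exact parameterization is not reproduced here.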
Abstract:
Nonlinear system identification is considered using a generalized kernel regression model. Unlike the standard kernel model, which employs a fixed common variance for all the kernel regressors, each kernel regressor in the generalized kernel model has an individually tuned diagonal covariance matrix that is determined by maximizing the correlation between the training data and the regressor using a repeated guided random search based on boosting optimization. An efficient construction algorithm based on orthogonal forward regression with leave-one-out (LOO) test statistic and local regularization (LR) is then used to select a parsimonious generalized kernel regression model from the resulting full regression matrix. The proposed modeling algorithm is fully automatic and the user is not required to specify any criterion to terminate the construction procedure. Experimental results involving two real data sets demonstrate the effectiveness of the proposed nonlinear system identification approach.
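A minimal sketch of the distinction described above (notation assumed, not taken from the paper): a standard Gaussian kernel regressor uses a common width, \phi_i(x) = \exp\{-\|x - c_i\|^2 / (2\sigma^2)\}, whereas the generalized kernel model gives each regressor its own diagonal covariance,

\phi_i(x) = \exp\{-\tfrac{1}{2}(x - c_i)^T \Sigma_i^{-1} (x - c_i)\}, \qquad \Sigma_i = \mathrm{diag}(\sigma_{i1}^2, \ldots, \sigma_{im}^2),

with the \sigma_{ij}^2 tuned individually (here by the boosting-based correlation search) before the orthogonal forward regression stage selects a sparse subset of the \phi_i.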
Abstract:
A polynomial-based ARMA model, when posed in a state-space framework, can be regarded in many different ways. In this paper two particular state-space forms of the ARMA model are considered, and although both are canonical in structure they differ in respect of the mode in which disturbances are fed into the state and output equations. For both forms a solution is found to the optimal discrete-time observer problem and algebraic connections between the two optimal observers are shown. The purpose of the paper is to highlight the fact that the optimal observer obtained from the first state-space form, commonly known as the innovations form, is not that employed in an optimal controller, in the minimum-output variance sense, whereas the optimal observer obtained from the second form is. Hence the second form is a much more appropriate state-space description to use for controller design, particularly when employed in self-tuning control schemes.
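As an illustrative sketch (the paper's exact canonical forms are not reproduced in the abstract), the innovations form drives the state and output equations with the same disturbance,

x_{t+1} = A x_t + K e_t, \qquad y_t = C x_t + e_t,

whereas the alternative form feeds separate disturbances into the two equations, x_{t+1} = A x_t + w_t, \; y_t = C x_t + v_t; the corresponding optimal observers therefore differ, which is the distinction the paper exploits.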
Abstract:
The transport of the Antarctic Circumpolar Current (ACC) varies strongly across the coupled GCMs (general circulation models) used for the IPCC AR4. This note shows that a large fraction of this across-model variance can be explained by relating it to the parameterization of eddy-induced transports. In the majority of models this parameterization is based on the study by Gent and McWilliams (1990). The main parameter is the quasi-Stokes diffusivity kappa (often referred to, less accurately, as "thickness diffusion"). The ACC transport and the meridional density gradient both correlate strongly with kappa across those models where kappa is a prescribed constant. In contrast, there is no correlation with the isopycnal diffusivity kappa_iso across the models. The sensitivity of the ACC transport to kappa is larger than to the zonal wind stress maximum. Experiments with the fast GCM FAMOUS show that changing kappa directly affects the ACC transport by changing the density structure throughout the water column. Our results suggest that this limits the role of the wind stress magnitude in setting the ACC transport in FAMOUS. The sensitivities of the ACC and the meridional density gradient are very similar across the AR4 GCMs (for those models where kappa is a prescribed constant) and among the FAMOUS experiments. The strong sensitivity of the ACC transport to kappa needs careful assessment in climate models.
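For reference (sign conventions vary between formulations), the Gent and McWilliams (1990) scheme represents eddy-induced transport through a quasi-Stokes streamfunction proportional to the isopycnal slope,

\psi^* = \kappa S, \qquad S = -\frac{\nabla_h \rho}{\partial\rho/\partial z},

with the eddy-induced velocities obtained from derivatives of \psi^*; the across-model analysis above concerns the prescribed constant \kappa, not the isopycnal diffusivity \kappa_{iso}.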
Abstract:
The estimation of the long-term wind resource at a prospective site based on a relatively short on-site measurement campaign is an indispensable task in the development of a commercial wind farm. The typical industry approach is based on the measure-correlate-predict (MCP) method, where a relational model between the site wind velocity data and the data obtained from a suitable reference site is built from concurrent records. In a subsequent step, a long-term prediction for the prospective site is obtained from a combination of the relational model and the historic reference data. In the present paper, a systematic study is presented where three new MCP models, together with two published reference models (a simple linear regression and the variance ratio method), have been evaluated based on concurrent synthetic wind speed time series for two sites, simulating the prospective and the reference site. The synthetic method has the advantage of generating time series with the desired statistical properties, including Weibull scale and shape factors, required to evaluate the five methods under all plausible conditions. In this work, first a systematic discussion of the statistical fundamentals behind MCP methods is provided and three new models, one based on a nonlinear regression and two (termed kernel methods) derived from the use of conditional probability density functions, are proposed. All models are evaluated by using five metrics under a wide range of values of the correlation coefficient, the Weibull scale, and the Weibull shape factor. Only one of all models, a kernel method based on bivariate Weibull probability functions, is capable of accurately predicting all performance metrics studied.
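Of the two published reference models mentioned, the forms are standard; with site and reference wind speeds y and x, concurrent means \bar{y}, \bar{x} and standard deviations s_y, s_x, simple linear regression predicts \hat{y} = \bar{y} + r\,(s_y/s_x)(x - \bar{x}), with the slope attenuated by the correlation r, whereas the variance ratio method uses

\hat{y} = \bar{y} + \frac{s_y}{s_x}(x - \bar{x}),

so that the predicted series reproduces the variance of the site data rather than underestimating it. The three new models are not specified in the abstract.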
Abstract:
We study the empirical performance of the classical minimum-variance hedging strategy, comparing several econometric models for estimating hedge ratios of crude oil, gasoline and heating oil crack spreads. Given the great variability and large jumps in both spot and futures prices, considerable care is required when processing the relevant data and accounting for the costs of maintaining and re-balancing the hedge position. We find that the variance reduction produced by all models is statistically and economically indistinguishable from the one-for-one “naïve” hedge. However, minimum-variance hedging models, especially those based on GARCH, generate much greater margin and transaction costs than the naïve hedge. Therefore we encourage hedgers to use a naïve hedging strategy on the crack spread bundles now offered by the exchange; this strategy is the cheapest and easiest to implement. Our conclusion contradicts the majority of the existing literature, which favours the implementation of GARCH-based hedging strategies.
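In this literature, the minimum-variance hedge ratio underlying the econometric models compared above is, for spot and futures price changes \Delta s_t and \Delta f_t,

h^* = \frac{\mathrm{Cov}(\Delta s_t, \Delta f_t)}{\mathrm{Var}(\Delta f_t)},

estimated either as a constant (the OLS slope from regressing spot changes on futures changes) or, in the GARCH-based models, as a time-varying ratio h_t = \sigma_{sf,t}/\sigma_{f,t}^2 built from a conditional covariance matrix; the naive hedge simply sets h = 1.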
Abstract:
The internal variability and coupling between the stratosphere and troposphere in CCMVal‐2 chemistry‐climate models are evaluated through analysis of the annular mode patterns of variability. Computation of the annular modes in long data sets with secular trends requires refinement of the standard definition of the annular mode, and a more robust procedure that allows for slowly varying trends is established and verified. The spatial and temporal structure of the models’ annular modes is then compared with that of reanalyses. As a whole, the models capture the key features of observed intraseasonal variability, including the sharp vertical gradients in structure between stratosphere and troposphere, the asymmetries in the seasonal cycle between the Northern and Southern hemispheres, and the coupling between the polar stratospheric vortices and tropospheric midlatitude jets. It is also found that the annular mode variability changes little in time throughout simulations of the 21st century. There are, however, both common biases and significant differences in performance in the models. In the troposphere, the annular mode in models is generally too persistent, particularly in the Southern Hemisphere summer, a bias similar to that found in CMIP3 coupled climate models. In the stratosphere, the periods of peak variance and coupling with the troposphere are delayed by about a month in both hemispheres. The relationship between increased variability of the stratosphere and increased persistence in the troposphere suggests that some tropospheric biases may be related to stratospheric biases and that a well‐simulated stratosphere can improve simulation of tropospheric intraseasonal variability.
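By way of background (the paper's refined definition is not reproduced in the abstract), the annular mode at each pressure level is conventionally the leading empirical orthogonal function e_1 of extratropical geopotential height anomalies, with the annular mode index given by the projection of the daily anomaly field onto it, AM(t) = \langle z'(t), e_1 \rangle; the refinement referred to above concerns how a slowly varying background state is removed before the anomalies are formed.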
Abstract:
Ensembles of extended Atmospheric Model Intercomparison Project (AMIP) runs from the general circulation models of the National Centers for Environmental Prediction (formerly the National Meteorological Center) and the Max-Planck Institute (Hamburg, Germany) are used to estimate the potential predictability (PP) of an index of the Pacific–North America (PNA) mode of climate change. The PP of this pattern in “perfect” prediction experiments is 20%–25% of the index’s variance. The models, particularly that from MPI, capture virtually all of this variance in their hindcasts of the winter PNA for the period 1970–93. The high levels of internally generated model noise in the PNA simulations reconfirm the need for an ensemble averaging approach to climate prediction. This means that the forecasts ought to be expressed in a probabilistic manner. It is shown that the models’ skills are higher by about 50% during strong SST events in the tropical Pacific, so the probabilistic forecasts need to be conditional on the tropical SST. Taken together with earlier studies, the present results suggest that the original set of AMIP integrations (single 10-yr runs) is not adequate to reliably test the participating models’ simulations of interannual climate variability in the midlatitudes.
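The potential predictability quoted above is conventionally the fraction of total interannual variance attributable to the boundary-forced (ensemble-mean) signal rather than to internally generated noise,

PP = \frac{\sigma^2_{signal}}{\sigma^2_{signal} + \sigma^2_{noise}},

so a figure of 20%–25% implies that roughly three quarters of the winter PNA index variance is internal atmospheric noise; the exact estimator used in the paper is not given in the abstract.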
Abstract:
This paper aims to understand the physical processes causing the large spread in the storm track projections of the CMIP5 climate models. In particular, the relationship between the climate change responses of the storm tracks, as measured by the 2–6 day mean sea level pressure variance, and the equator-to-pole temperature differences at upper- and lower-tropospheric levels is investigated. In the southern hemisphere the responses of the upper- and lower-tropospheric temperature differences are correlated across the models and as a result they share similar associations with the storm track responses. There are large regions in which the storm track responses are correlated with the temperature difference responses, and a simple linear regression model based on the temperature differences at either level captures the spatial pattern of the mean storm track response as well as explaining between 30 and 60% of the inter-model variance of the storm track responses. In the northern hemisphere the responses of the two temperature differences are not significantly correlated and their associations with the storm track responses are more complicated. In summer, the responses of the lower-tropospheric temperature differences dominate the inter-model spread of the storm track responses. In winter, the responses of the upper- and lower-tropospheric temperature differences both play a role. The results suggest that there is potential to reduce the spread in storm track responses by constraining the relative magnitudes of the warming in the tropical and polar regions.
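The simple linear regression referred to above is, in generic form (notation assumed), a fit across models m at each grid point x of the storm track response onto the temperature-difference response,

\Delta STR_m(x) = a(x) + b(x)\,\Delta T_m + \varepsilon_m(x),

with the explained fraction of inter-model variance (the 30–60% quoted) given by the regression R^2 at that location.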
Abstract:
Although financial theory rests heavily upon the assumption that asset returns are normally distributed, value indices of commercial real estate display significant departures from normality. In this paper, we apply and compare the properties of two recently proposed regime switching models for value indices of commercial real estate in the US and the UK, both of which relax the assumption that observations are drawn from a single distribution with constant mean and variance. Statistical tests of the models' specification indicate that the Markov switching model is better able to capture the non-stationary features of the data than the threshold autoregressive model, although both represent superior descriptions of the data than the models that allow for only one state. Our results have several implications for theoretical models and empirical research in finance.
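In generic form (the paper's exact specifications are not reproduced in the abstract), a two-state Markov switching model for index returns r_t lets the mean and variance depend on a latent state s_t \in \{1, 2\},

r_t = \mu_{s_t} + \sigma_{s_t}\varepsilon_t, \qquad \varepsilon_t \sim \text{i.i.d. } N(0,1), \qquad P(s_t = j \mid s_{t-1} = i) = p_{ij},

whereas a threshold autoregressive model switches regimes deterministically according to whether an observed lagged variable crosses a threshold c, for example following one AR specification when r_{t-d} \le c and another otherwise.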
Abstract:
We propose a new class of neurofuzzy construction algorithms with the aim of maximizing generalization capability, specifically for imbalanced data classification problems, based on leave-one-out (LOO) cross validation. The algorithms are in two stages: first, an initial rule base is constructed by estimating a Gaussian mixture model with analysis-of-variance decomposition from the input data; the second stage carries out joint weighted least squares parameter estimation and rule selection using an orthogonal forward subspace selection (OFSS) procedure. We show how different LOO-based rule selection criteria can be incorporated with OFSS, and advocate either maximizing the leave-one-out area under the curve (AUC) of the receiver operating characteristic, or maximizing the leave-one-out F-measure if the data sets exhibit an imbalanced class distribution. Extensive comparative simulations illustrate the effectiveness of the proposed algorithms.
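The two LOO selection criteria advocated above are standard: with leave-one-out precision P and recall R computed from the LOO-predicted class labels, the F-measure is

F = \frac{2PR}{P + R},

while the LOO-AUC is the area under the receiver operating characteristic curve traced out by thresholding the LOO-predicted scores; both are better suited than raw accuracy when the class distribution is imbalanced.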