53 results for out-of-sample forecast
Abstract:
When performing data fusion, one often measures where targets were and then wishes to deduce where targets currently are. There has been recent research on the processing of such out-of-sequence data. This research has culminated in the development of a number of algorithms for solving the associated tracking problem. This paper reviews these different approaches in a common Bayesian framework and proposes an architecture that orthogonalises the data association and out-of-sequence problems such that any combination of solutions to these two problems can be used together. The emphasis is not on advocating one approach over another on the basis of computational expense, but rather on understanding the relationships among the algorithms so that any approximations made are explicit. Results for a multi-sensor scenario involving out-of-sequence data association are used to illustrate the utility of this approach in a specific context.
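To make the out-of-sequence-measurement (OOSM) problem concrete, here is a minimal sketch of the simplest exact (but storage- and computation-heavy) remedy: buffer measurements with their timestamps and re-run the filter over the buffer in time order whenever a delayed report arrives. This is a generic illustration with a 1D constant-velocity Kalman filter, not the specific architecture proposed in the paper; all parameter values are arbitrary.

```python
import numpy as np

def kf_step(x, P, z, dt, q=0.1, r=0.5):
    """One predict+update step of a constant-velocity Kalman filter."""
    F = np.array([[1.0, dt], [0.0, 1.0]])                   # state transition
    Q = q * np.array([[dt**3/3, dt**2/2], [dt**2/2, dt]])   # process noise
    H = np.array([[1.0, 0.0]])                              # observe position only
    x = F @ x
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + r                                     # innovation variance
    K = P @ H.T / S                                         # Kalman gain
    x = x + (K * (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

def filter_in_order(measurements):
    """Run the filter over (time, value) pairs sorted by timestamp."""
    x, P, t = np.zeros(2), np.eye(2), 0.0
    for tz, z in sorted(measurements):
        x, P = kf_step(x, P, z, dt=tz - t)
        t = tz
    return x, P

# A delayed measurement (t=2.0) arriving after the t=3.0 report is handled
# exactly by re-filtering the buffered measurements in time order.
buffer = [(1.0, 1.1), (3.0, 3.2)]
buffer.append((2.0, 1.9))          # out-of-sequence report
x, P = filter_in_order(buffer)
print("state after reprocessing:", x)
```

The algorithms the paper reviews can be read as approximations that avoid this full reprocessing while targeting the same Bayesian posterior.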
Abstract:
Using NCANDS data on US child maltreatment reports for 2009, logistic regression, probit analysis, discriminant analysis and an artificial neural network are used to determine the factors that explain the decision to place a child in out-of-home care. As well as developing a new model for 2009, a previous study based on 2005 data is replicated. While there are many small differences, the four estimation techniques give broadly the same results, demonstrating the robustness of the findings. Similarly, apart from age and sexual abuse, the 2005 and 2009 results are roughly similar. For 2009, child characteristics (particularly child emotional problems) are more important than the nature of the abuse and the situation of the household, while caregiver characteristics are the least important. All these models have low explanatory power.
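As a concrete illustration of why logit and probit tend to agree, the sketch below fits both to synthetic data (NCANDS microdata is access-restricted, so the three predictors are hypothetical stand-ins for child, abuse and caregiver characteristics). The coefficient vectors differ by a roughly constant scale factor, so the two models rank predictors the same way.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 3))                          # child / abuse / caregiver
latent = 0.8 * X[:, 0] + 0.3 * X[:, 1] + 0.1 * X[:, 2]
y = (latent + rng.logistic(size=n) > 0).astype(int)  # 1 = out-of-home care

Xc = sm.add_constant(X)
logit = sm.Logit(y, Xc).fit(disp=0)
probit = sm.Probit(y, Xc).fit(disp=0)

# Coefficients differ by scale (~1.6) but rank the predictors identically,
# mirroring the paper's robustness finding across estimators.
print(logit.params.round(2))
print(probit.params.round(2))
print((logit.params / probit.params).round(2))       # roughly constant ratio
```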
Abstract:
The epoxide ring in 5,6-dihydro-5,6-epoxy-1,10-phenanthroline (L) opens up in its reaction with 4-methylaniline and 4-methoxyaniline in water in equimolar proportion at room temperature, without any Lewis acid catalyst, to give a monohydrate of 6-(4-methylphenylamino)-5,6-dihydro-1,10-phenanthrolin-5-ol (L′·H2O) and 6-(4-methoxyphenylamino)-5,6-dihydro-1,10-phenanthrolin-5-ol (L″), respectively. In boiling water the reaction time decreases from 72 to 14 h, but the yields are lower. Reaction of L with Zn(ClO4)2·6H2O in methanol in 3:1 molar ratio at room temperature affords white [ZnL3](ClO4)2·H2O. The X-ray crystal structure of the acetonitrile solvate [ZnL3](ClO4)2·MeCN has been determined; it shows that the metal has a distorted octahedral N6 coordination sphere. [ZnL3](ClO4)2·2H2O reacts with 4-methylaniline and 4-methoxyaniline in boiling water in 1:3 molar proportion, in the absence of any Lewis acid catalyst, to produce [ZnL′3](ClO4)2·4H2O and [ZnL″3](ClO4)2·H2O, respectively, within 1–4 h, albeit in somewhat low yield. In the 1H NMR spectra of [ZnL′3](ClO4)2·4H2O and [ZnL″3](ClO4)2·H2O, only one sharp methyl signal is observed, implying that only one diastereomer out of the 2³ possibilities is formed. The same diastereomers are obtained in very good yields when L′·H2O and L″ are reacted directly with Zn(ClO4)2·6H2O in tetrahydrofuran at room temperature. Reactions of L′·H2O and L″ with Ru(phen)2Cl2·2H2O (phen = 1,10-phenanthroline) in equimolar proportion in a methanol–water mixture under reflux lead to the isolation of two diastereomers of [Ru(phen)2L′](ClO4)2·2H2O and [Ru(phen)2L″](ClO4)2·2H2O.
Abstract:
Since the Dearing Report,1 there has been an increased emphasis on the development of employability and transferable ('soft') skills in undergraduate programmes. Within STEM subject areas, recent reports concluded that universities should offer 'greater and more sustainable variety in modes of study to meet the changing demands of industry and students'.2 At the same time, higher education (HE) institutions are increasingly conscious of the sensitivity of league table positions to employment statistics and graduate destinations. Modules that are either credit-bearing or non-credit-bearing are finding their way into the core curriculum in HE. While the UK government and other educational bodies debate the way forward on A-level reform, universities must also meet the needs of their first-year cohorts in terms of the secondary-to-tertiary transition and developing independence in learning.
Abstract:
We test whether there are nonlinearities in the response of short- and long-term interest rates to the spread in interest rates, and assess the out-of-sample predictability of interest rates using linear and nonlinear models. We find strong evidence of nonlinearities in the response of interest rates to the spread. Nonlinearities are shown to result in more accurate short-horizon forecasts, especially of the spread.
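A minimal sketch of this kind of exercise: fit a linear model of rate changes on the lagged spread and a threshold alternative with separate slopes above and below zero, then compare out-of-sample RMSEs. The data, the threshold location and the two-regime form are assumptions for illustration; the paper's nonlinear specification may differ.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 400
spread = rng.normal(size=T)
# True process: the rate change responds strongly to the spread only when positive.
d_rate = np.where(spread > 0, 0.9 * spread, 0.1 * spread) + 0.3 * rng.normal(size=T)

split = 300
y_tr, s_tr = d_rate[:split], spread[:split]
y_te, s_te = d_rate[split:], spread[split:]

# Linear benchmark: one slope for all spread values.
b_lin = np.polyfit(s_tr, y_tr, 1)
pred_lin = np.polyval(b_lin, s_te)

# Threshold model: separate slopes above and below zero.
hi, lo = s_tr > 0, s_tr <= 0
b_hi = np.polyfit(s_tr[hi], y_tr[hi], 1)
b_lo = np.polyfit(s_tr[lo], y_tr[lo], 1)
pred_thr = np.where(s_te > 0, np.polyval(b_hi, s_te), np.polyval(b_lo, s_te))

rmse = lambda e: np.sqrt(np.mean(e**2))
print("linear OOS RMSE:   ", rmse(y_te - pred_lin))
print("threshold OOS RMSE:", rmse(y_te - pred_thr))
```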
Abstract:
This paper demonstrates that the use of GARCH-type models for the calculation of minimum capital risk requirements (MCRRs) may lead to inaccurate and therefore inefficient capital requirements. We show that this inaccuracy stems from the fact that GARCH models typically overstate the degree of persistence in return volatility. A simple modification to the model is found to improve the accuracy of MCRR estimates in both back-tests and out-of-sample tests. Given that internal risk management models are in widespread use in some parts of the world (most notably the USA), and will soon be permitted for EC banks and investment firms, we believe that our paper should serve as a valuable caution to risk management practitioners who are using, or intend to use, this popular class of models.
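The sketch below shows how a simulation-based MCRR can be computed from a GARCH(1,1): simulate many return paths, track the worst cumulative loss on each, and take a tail quantile. The parameter values (alpha + beta = 0.98, i.e. high persistence) are assumed for illustration, not estimated, and the paper's persistence-reducing modification is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)
omega, alpha, beta = 0.05, 0.08, 0.90   # assumed params; alpha + beta = 0.98
horizon, n_paths = 10, 20000

h = np.full(n_paths, omega / (1 - alpha - beta))   # start at unconditional variance
cum_ret = np.zeros(n_paths)
min_cum = np.zeros(n_paths)
for _ in range(horizon):
    r = np.sqrt(h) * rng.standard_normal(n_paths)  # simulated daily return
    cum_ret += r
    min_cum = np.minimum(min_cum, cum_ret)         # worst drawdown so far
    h = omega + alpha * r**2 + beta * h            # GARCH(1,1) variance recursion

# MCRR for a long position: capital covering the worst loss with 95% confidence.
mcrr = -np.quantile(min_cum, 0.05)
print(f"10-day 95% MCRR: {mcrr:.2f} (in the same units as returns)")
```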
Abstract:
This paper examines the predictability of real estate asset returns using a number of time series techniques. A vector autoregressive model, which incorporates financial spreads, is able to improve upon the out-of-sample forecasting performance of univariate time series models at a short forecasting horizon. However, as the forecasting horizon increases, the explanatory power of such models is reduced, so that returns on real estate assets are best forecast using the long-term mean of the series. In the case of indirect property returns, such short-term forecasts can be turned into a trading rule that generates excess returns over a buy-and-hold strategy gross of transaction costs, although none of the trading rules developed could cover the associated transaction costs. It is therefore concluded that such forecastability is entirely consistent with stock market efficiency.
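A sketch of the comparison described above, using a two-variable VAR in (simulated stand-ins for) property returns and a financial spread, evaluated out of sample against the long-term-mean benchmark. Built on statsmodels' VAR class; the series, lag order and 12-month horizon are illustrative assumptions.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(3)
T = 240
spread = rng.normal(size=T).cumsum() * 0.05        # slow-moving financial spread
returns = 0.4 * np.roll(spread, 1) + rng.normal(size=T)
df = pd.DataFrame({"ret": returns, "spread": spread}).iloc[1:]

train, test = df.iloc[:-12], df.iloc[-12:]
res = VAR(train).fit(2)                             # VAR(2), lag order assumed
fcast = res.forecast(train.values[-res.k_ar:], steps=12)[:, 0]

naive = np.full(12, train["ret"].mean())            # long-term mean benchmark
rmse = lambda e: np.sqrt(np.mean(e**2))
print("VAR RMSE: ", rmse(test["ret"].values - fcast))
print("mean RMSE:", rmse(test["ret"].values - naive))
```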
Abstract:
This paper investigates the effect on balance of a number of Schur product-type localization schemes which have been designed with the primary function of reducing spurious far-field correlations in forecast error statistics. The localization schemes studied comprise a non-adaptive scheme (where the moderation matrix is decomposed in a spectral basis), and two adaptive schemes, namely a simplified version of SENCORP (Smoothed ENsemble COrrelations Raised to a Power) and ECO-RAP (Ensemble COrrelations Raised to A Power). The paper shows, we believe for the first time, how the degree of balance (geostrophic and hydrostatic) implied by the error covariance matrices localized by these schemes can be diagnosed. Here it is considered that an effective localization scheme is one that reduces spurious correlations adequately but also minimizes disruption of balance (where the 'correct' degree of balance or imbalance is assumed to be possessed by the unlocalized ensemble). By varying free parameters that describe each scheme (e.g. the degree of truncation in the schemes that use the spectral basis, the 'order' of each scheme, and the degree of ensemble smoothing), it is found that a particular configuration of the ECO-RAP scheme is best suited to the convective-scale system studied. According to our diagnostics this ECO-RAP configuration still weakens geostrophic and hydrostatic balance, but overall this is less so than for other schemes.
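The common core of all these schemes is the Schur (elementwise) product of a noisy ensemble covariance with a compactly supported 'moderation' matrix. The sketch below applies a Gaspari-Cohn taper on a toy one-dimensional grid; the cutoff length and ensemble are arbitrary, and the adaptive SENCORP/ECO-RAP refinements (and the spectral-basis decomposition of the non-adaptive scheme) are not reproduced.

```python
import numpy as np

def gaspari_cohn(r):
    """Gaspari-Cohn compactly supported correlation, r = distance / cutoff."""
    r = np.abs(r)
    c = np.zeros_like(r)
    m = r <= 1
    c[m] = 1 - 5/3*r[m]**2 + 5/8*r[m]**3 + 1/2*r[m]**4 - 1/4*r[m]**5
    m = (r > 1) & (r <= 2)
    c[m] = (4 - 5*r[m] + 5/3*r[m]**2 + 5/8*r[m]**3
            - 1/2*r[m]**4 + 1/12*r[m]**5 - 2/(3*r[m]))
    return c

rng = np.random.default_rng(4)
n_grid, n_ens = 50, 10
ensemble = rng.standard_normal((n_ens, n_grid)).cumsum(axis=1)  # toy smooth fields
anoms = ensemble - ensemble.mean(axis=0)
B = anoms.T @ anoms / (n_ens - 1)          # small-ensemble sample covariance

dist = np.abs(np.subtract.outer(np.arange(n_grid), np.arange(n_grid)))
C = gaspari_cohn(dist / 10.0)              # moderation (localization) matrix
B_loc = B * C                              # Schur product: elementwise taper

# Spurious far-field covariance is removed entirely beyond twice the cutoff.
print("far-field cov before/after:", B[0, -1].round(3), B_loc[0, -1].round(3))
```

The paper's diagnostics ask what this taper does to the balance (geostrophic and hydrostatic) implied by B_loc, which the elementwise product does not preserve in general.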
Abstract:
This paper uses a novel numerical optimization technique - robust optimization - that is well suited to solving the asset-liability management (ALM) problem for pension schemes. It requires the estimation of fewer stochastic parameters, reduces estimation risk and adopts a prudent approach to asset allocation. This study is the first to apply robust optimization to a real-world pension scheme, and the first ALM model of a pension scheme to maximise the Sharpe ratio. We disaggregate pension liabilities into three components - active members, deferred members and pensioners - and transform the optimal asset allocation into the scheme's projected contribution rate. The robust optimization model is extended to include liabilities and used to derive optimal investment policies for the Universities Superannuation Scheme (USS), benchmarked against the Sharpe and Tint, Bayes-Stein and Black-Litterman models, as well as the actual USS investment decisions. Over a 144-month out-of-sample period, robust optimization is superior to the four benchmarks across 20 performance criteria and has a remarkably stable, essentially fixed-mix asset allocation. These conclusions are supported by six robustness checks.
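A minimal sketch of the robust-optimization idea applied to Sharpe-ratio maximization: shrink each asset's estimated mean by an uncertainty penalty proportional to its standard error, then maximise the resulting worst-case Sharpe ratio. The three-asset data, the uncertainty-set parameter kappa and the long-only constraint are assumptions; liabilities and the USS specifics are omitted.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
R = rng.normal(0.005, 0.04, size=(144, 3))      # 144 months, 3 assets (toy data)
mu, Sigma = R.mean(axis=0), np.cov(R.T)
kappa = 1.0                                     # size of the uncertainty set
mu_rob = mu - kappa * R.std(axis=0) / np.sqrt(len(R))  # worst-case means

def neg_sharpe(w):
    # Negative worst-case Sharpe ratio, to be minimized.
    return -(w @ mu_rob) / np.sqrt(w @ Sigma @ w)

cons = {"type": "eq", "fun": lambda w: w.sum() - 1}    # fully invested
bounds = [(0, 1)] * 3                                  # long-only
w0 = np.ones(3) / 3
res = minimize(neg_sharpe, w0, bounds=bounds, constraints=cons)
print("robust weights:", res.x.round(3))
```

Because the penalty punishes assets whose means are poorly estimated, the resulting weights tend to move less as the sample is updated, which is one intuition for the stable, essentially fixed-mix allocation reported in the paper.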
Abstract:
This paper characterizes the dynamics of jumps and analyzes their importance for volatility forecasting. Using high-frequency data on four prominent energy markets, we perform a model-free decomposition of realized variance into its continuous and discontinuous components. We find strong evidence of jumps in energy markets between 2007 and 2012. We then investigate the importance of jumps for volatility forecasting. To this end, we estimate and analyze the predictive ability of several Heterogeneous Autoregressive (HAR) models that explicitly capture the dynamics of jumps. Conducting extensive in-sample and out-of-sample analyses, we establish that explicitly modeling jumps does not significantly improve forecast accuracy. Our results are broadly consistent across our four energy markets, forecasting horizons, and loss functions.
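A sketch of a HAR-RV-J forecasting regression of the kind the paper estimates: next-day realized variance on daily, weekly (5-day) and monthly (22-day) RV averages plus a jump term. The RV and jump series are simulated placeholders for the bipower-variation decomposition; dropping the "jump" column gives the nested HAR-RV benchmark used to judge whether jumps improve forecasts.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(6)
T = 1500
jumps = rng.binomial(1, 0.05, T) * rng.exponential(0.5, T)  # rare discontinuous part
cont = np.exp(rng.normal(-1, 0.3, T))                       # continuous variance part
rv = pd.Series(cont + jumps)                                # realized variance

X = pd.DataFrame({
    "rv_d": rv,                       # daily RV
    "rv_w": rv.rolling(5).mean(),     # weekly average
    "rv_m": rv.rolling(22).mean(),    # monthly average
    "jump": pd.Series(jumps),         # jump component
})
y = rv.shift(-1)                      # target: next day's RV
data = pd.concat([y.rename("y"), X], axis=1).dropna()

# OLS via least squares; compare with/without "jump" to assess its contribution.
A = np.column_stack([np.ones(len(data)), data[["rv_d", "rv_w", "rv_m", "jump"]]])
beta, *_ = np.linalg.lstsq(A, data["y"], rcond=None)
print("HAR-RV-J coefficients [const, d, w, m, jump]:", beta.round(3))
```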
Abstract:
Structural differences among models account for much of the uncertainty in projected climate changes, at least until the mid-twenty-first century. Recent observations encompass too limited a range of climate variability to provide a robust test of the ability to simulate climate changes. Past climate changes provide a unique opportunity for out-of-sample evaluation of model performance. Palaeo-evaluation has shown that the large-scale changes seen in twenty-first-century projections, including enhanced land–sea temperature contrast, latitudinal amplification, changes in temperature seasonality and scaling of precipitation with temperature, are likely to be realistic. Although models generally simulate changes in large-scale circulation sufficiently well to shift regional climates in the right direction, they often do not predict the correct magnitude of these changes. Differences in performance are only weakly related to modern-day biases or climate sensitivity, and more sophisticated models are not better at simulating climate changes. Although models correctly capture the broad patterns of climate change, improvements are required to produce reliable regional projections.