90 results for Conditional Heteroskedasticity
Abstract:
This paper considers the effect of GARCH errors on the tests proposed by Perron (1997) for a unit root in the presence of a structural break. We assess the impact of degeneracy and integratedness of the conditional variance individually and find that, apart from in the limit, the testing procedure is insensitive to the degree of degeneracy but does exhibit increasing over-sizing as the process becomes more integrated. When we consider the GARCH specifications that we are likely to encounter in empirical research, we find that the Perron tests are reasonably robust to the presence of GARCH and do not suffer from severe over- or under-rejection of a correct null hypothesis.
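A minimal sketch of the kind of size experiment described above, under stated assumptions: it uses a plain Dickey-Fuller t-test (not the Perron 1997 break statistic) as a stand-in, and simulates a driftless random walk driven by GARCH(1,1) errors whose persistence alpha + beta is pushed towards one. All parameter values are illustrative.

```python
# Sketch: empirical size of a Dickey-Fuller t-test under GARCH(1,1) errors.
import numpy as np

rng = np.random.default_rng(0)

def garch_errors(n, alpha, beta, omega=0.05):
    """Simulate GARCH(1,1) innovations u_t = sigma_t * z_t."""
    u = np.empty(n)
    h = omega / max(1.0 - alpha - beta, 1e-6)  # start at unconditional variance
    for t in range(n):
        z = rng.standard_normal()
        u[t] = np.sqrt(h) * z
        h = omega + alpha * u[t] ** 2 + beta * h
    return u

def df_tstat(y):
    """t-statistic on rho in: dy_t = rho * y_{t-1} + e_t (no constant)."""
    dy, ylag = np.diff(y), y[:-1]
    rho = (ylag @ dy) / (ylag @ ylag)
    resid = dy - rho * ylag
    se = np.sqrt((resid @ resid) / (len(dy) - 1) / (ylag @ ylag))
    return rho / se

CRIT = -1.95  # approximate 5% DF critical value, no-constant case
for alpha, beta in [(0.05, 0.50), (0.05, 0.80), (0.05, 0.94)]:
    rej = np.mean([df_tstat(np.cumsum(garch_errors(200, alpha, beta))) < CRIT
                   for _ in range(2000)])
    print(f"alpha+beta={alpha+beta:.2f}: empirical size {rej:.3f} (nominal 0.05)")
```

As persistence rises towards an integrated GARCH, the rejection rate of the true null drifts above the nominal level, mirroring the over-sizing the paper reports.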
Abstract:
Given a nonlinear model, a probabilistic forecast may be obtained by Monte Carlo simulation. At a given forecast horizon, Monte Carlo simulations yield sets of discrete forecasts, which can be converted to density forecasts. The resulting density forecasts will inevitably be downgraded by model mis-specification. In order to enhance the quality of the density forecasts, one can mix them with the unconditional density. This paper examines the value of combining conditional density forecasts with the unconditional density. The findings have positive implications for issuing early warnings in different disciplines, including economics and meteorology; UK inflation forecasts are considered as an example.
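A minimal sketch of the combination idea, under assumptions not in the abstract: the conditional forecast is a deliberately mis-specified Gaussian, the data are a synthetic AR(1), and the mixing weight is chosen by maximizing the average log score.

```python
# Sketch: mix a mis-specified conditional density with the unconditional density.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# True process: AR(1); the forecaster's model uses a biased mean (mis-specification).
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.8 * y[t - 1] + rng.standard_normal()

cond_mean = 0.8 * y[:-1] + 0.3            # biased conditional mean forecast
cond_pdf = norm.pdf(y[1:], loc=cond_mean, scale=1.0)
uncond_pdf = norm.pdf(y[1:], loc=y.mean(), scale=y.std())

weights = np.linspace(0, 1, 101)
log_scores = [np.mean(np.log(w * cond_pdf + (1 - w) * uncond_pdf)) for w in weights]
best = weights[int(np.argmax(log_scores))]
print(f"best weight on conditional density: {best:.2f}")
```

When the conditional model is badly wrong, the optimal weight shifts towards the unconditional density; when it is accurate, the weight stays near one.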
Abstract:
With the increasing frequency and magnitude of warmer days during summer in the UK, bedding plants, traditionally part of the urban green landscape, are perceived as unsustainable and water-demanding. During recent summers when bans on irrigation were imposed, use and sales of bedding plants dropped dramatically, with a negative financial impact on the nursery industry. Retaining bedding species as a feature of public and even private spaces in future may be conditional on their being managed in a manner that minimises water use. Using Petunia × hybrida ‘Hurrah White’, we aimed to discover which irrigation approach was the most efficient for maintaining the plants’ ornamental quality (flower number, size and longevity) and shoot and root growth under water deficit and periods of complete water withdrawal. Plants were grown from plugs for 51 days in wooden rhizotrons (0.35 m (h) × 0.1 m (w) × 0.065 m (d)); the rhizotron fronts comprised clear Perspex, which enabled us to monitor root growth closely. Irrigation treatments were: 1. watering with 50% of container capacity by conventional surface drip-irrigation (‘50% TOP’); 2. 50% as sub-irrigation at 10 cm depth (‘50% SUB’); 3. ‘split’ irrigation, 25% as surface drip- and 25% as sub-irrigation at 15 cm depth (‘25/25 SPLIT’); 4. 25% as conventional surface drip-irrigation (‘25% TOP’). Plants were irrigated daily at 18:00, apart from days 34–36 (inclusive) when water was withheld from all treatments. Plants in ‘50% SUB’ had the most flowers, and flower size was comparable to that of ‘50% TOP’. Differences between treatments in other ‘quality’ parameters (height, shoot number) were biologically small. There was less root growth at deeper levels of the soil profile for ‘50% TOP’, indicating that irrigation methods such as ‘50% SUB’ and ‘25/25 SPLIT’, and stronger water deficits, encouraged deeper root growth. We suggest that sub-irrigation at 10 cm depth with water amounts of 50% of container capacity would give the most root growth with maximum flowering for Petunia. Leaf stomatal conductance appeared most sensitive to changes in substrate moisture content in the deepest part of the soil profile, where most roots were situated.
Abstract:
This study uses a bootstrap methodology to explicitly distinguish between skill and luck for 80 Real Estate Investment Trust Mutual Funds in the period January 1995 to May 2008. The methodology successfully captures non-normality in the idiosyncratic risk of the funds. Using unconditional, beta conditional and alpha-beta conditional estimation models, the results indicate that all but one fund demonstrates poor skill. Tests of robustness show that this finding is largely invariant to REIT market conditions and maturity.
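The abstract does not spell out its bootstrap, so the following is a generic sketch of a residual-resampling skill-versus-luck test (in the spirit of Kosowski-type procedures): impose a zero-alpha null by resampling each fund's regression residuals, then compare the best fund's actual alpha t-statistic with the luck-only bootstrap distribution. The fund data are synthetic.

```python
# Sketch: bootstrap distribution of the best fund's alpha t-stat under no skill.
import numpy as np

rng = np.random.default_rng(2)
T, n_funds, n_boot = 160, 80, 500

market = rng.standard_normal(T) * 0.04
X = np.column_stack([np.ones(T), market])

def alpha_tstat(r, X):
    """Alpha t-stat, residuals, and zero-alpha fitted values for one fund."""
    beta_hat, *_ = np.linalg.lstsq(X, r, rcond=None)
    resid = r - X @ beta_hat
    s2 = resid @ resid / (T - 2)
    cov = s2 * np.linalg.inv(X.T @ X)
    return beta_hat[0] / np.sqrt(cov[0, 0]), resid, X @ beta_hat - beta_hat[0]

returns = 1.0 * market[:, None] + rng.standard_normal((T, n_funds)) * 0.05
actual_t, resids, fitted0 = [], [], []
for i in range(n_funds):
    t, resid, fit0 = alpha_tstat(returns[:, i], X)
    actual_t.append(t); resids.append(resid); fitted0.append(fit0)

# Resample residuals jointly across funds (same dates) to keep cross-correlation.
boot_max = np.empty(n_boot)
for b in range(n_boot):
    idx = rng.integers(0, T, T)
    ts = [alpha_tstat(fitted0[i] + resids[i][idx], X)[0] for i in range(n_funds)]
    boot_max[b] = max(ts)
p = np.mean(boot_max >= max(actual_t))
print(f"bootstrap p-value that the best fund's alpha is just luck: {p:.3f}")
```

Because the resampling is non-parametric, the procedure captures non-normality in the funds' idiosyncratic risk, which is the point the abstract emphasises.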
Abstract:
The issue of whether Real Estate Investment Trusts should pursue a focused or diversified investment strategy remains an ongoing debate within both the academic and industry communities. This paper considers the relationship between REITs focused on different property sectors in a GARCH-DCC framework. The daily conditional correlations reveal that since 1990 there has been a marked upward trend in the coefficients between US REIT sub-sectors. The findings imply that REITs are behaving in a far more homogeneous manner than in the past. Furthermore, the argument that REITs should be sector-focused so that investors can make the diversification decision themselves is weakened.
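A minimal sketch of the DCC correlation recursion that underlies such an analysis, applied to standardized residuals as they would come out of univariate GARCH fits for each sub-sector index; the parameters a and b and the data here are illustrative, not estimated.

```python
# Sketch: Dynamic Conditional Correlation (DCC) recursion on standardized residuals.
import numpy as np

rng = np.random.default_rng(3)
T, k = 1000, 3
eps = rng.standard_normal((T, k))          # stand-in standardized residuals

a, b = 0.03, 0.95                          # illustrative DCC parameters, a + b < 1
Q_bar = np.corrcoef(eps.T)                 # unconditional correlation target
Q = Q_bar.copy()
cond_corr_12 = np.empty(T)
for t in range(T):
    d = np.sqrt(np.diag(Q))
    R = Q / np.outer(d, d)                 # conditional correlation matrix R_t
    cond_corr_12[t] = R[0, 1]
    e = eps[t][:, None]
    Q = (1 - a - b) * Q_bar + a * (e @ e.T) + b * Q   # update with today's shock
print("mean conditional correlation (series 1 vs 2):", cond_corr_12.mean().round(3))
```

Tracking R_t[0, 1] over time is exactly the kind of daily conditional correlation series whose upward trend the paper documents.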
Abstract:
This paper studies the effects of increasing formality via tax reduction and simplification schemes on micro-firm performance, using the 1997 Brazilian SIMPLES program. We develop a simple theoretical model to show that SIMPLES affects only a segment of the micro-firm population, for which the effect of formality on firm performance can be identified and analyzed along the quantiles of the conditional firm-revenue distribution. To estimate the effect of formality, we use an econometric approach that compares eligible and non-eligible firms born shortly before and after the introduction of SIMPLES, using an estimator that combines quantile regression with a regression discontinuity identification strategy. The empirical results corroborate the positive effect of formality on micro-firm performance and produce a clear characterization of who benefits from these programs.
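A minimal sketch of the flavour of such an estimator, on synthetic firms with hypothetical variable names: quantile regressions of log revenue on an eligibility-times-timing interaction. A faithful implementation would restrict the sample to a narrow window around the SIMPLES cutoff and use the firm's birth date as the running variable.

```python
# Sketch: quantile regressions of log revenue on a formality "treatment" proxy.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 4000
born_after = rng.integers(0, 2, n)               # born after SIMPLES introduction
eligible = rng.integers(0, 2, n)                 # program eligibility
treated = born_after * eligible                  # formality "treatment" proxy
log_revenue = 1.0 + 0.25 * treated + 0.1 * eligible + rng.standard_normal(n)

df = pd.DataFrame(dict(log_revenue=log_revenue, treated=treated,
                       eligible=eligible, born_after=born_after))
for q in (0.25, 0.5, 0.75):
    fit = smf.quantreg("log_revenue ~ treated + eligible + born_after", df).fit(q=q)
    print(f"q={q}: estimated treatment effect {fit.params['treated']:.3f}")
```

Running the same regression at several quantiles is what lets the paper characterise which part of the revenue distribution benefits from formality.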
Abstract:
Recently, major processor manufacturers have announced a dramatic shift in their paradigm for increasing computing power over the coming years. Instead of focusing on faster clock speeds and more powerful single-core CPUs, the trend clearly goes towards multi-core systems. This will also result in a paradigm shift in the development of algorithms for computationally expensive tasks, such as data mining applications. Obviously, work on parallel algorithms is not new per se, but concentrated efforts in the many application domains are still missing. Multi-core systems, but also clusters of workstations and even large-scale distributed computing infrastructures, provide new opportunities and pose new challenges for the design of parallel and distributed algorithms. Since data mining and machine learning systems rely on high-performance computing, research on the corresponding algorithms must be at the forefront of parallel algorithm research in order to keep pushing data mining and machine learning applications to be more powerful and, especially for the former, interactive.

To bring together researchers and practitioners working in this exciting field, a workshop on parallel data mining was organized as part of PKDD/ECML 2006 (Berlin, Germany). The six contributions selected for the program describe various aspects of data mining and machine learning approaches featuring low to high degrees of parallelism. The first contribution addresses the classic problem of distributed association rule mining, focusing on communication efficiency to improve the state of the art. After this, a parallelization technique for speeding up decision tree construction by means of thread-level parallelism for shared-memory systems is presented. The next paper discusses the design of a parallel approach to the frequent subgraph mining problem for distributed-memory systems; this approach is based on a hierarchical communication topology to address issues arising in multi-domain computational environments. The fourth paper describes the combined use and customization of software packages to facilitate top-down parallelism in the tuning of Support Vector Machines (SVMs), and the next contribution presents an interesting idea concerning parallel training of Conditional Random Fields (CRFs) and motivates their use in labeling sequential data. The last contribution focuses on very efficient feature selection, describing a parallel algorithm for feature selection from random subsets.

Selecting the papers included in this volume would not have been possible without the help of an international Program Committee that provided detailed reviews for each paper. We would also like to thank Matthew Otey, who helped with publicity for the workshop.
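As an illustration of the data-parallel pattern behind communication-efficient distributed association rule mining (the topic of the first contribution), the following hedged sketch counts candidate itemset support locally on each partition and merges only the small count vectors; the transactions and candidates are toy data, not from any workshop paper.

```python
# Sketch: distributed support counting for candidate itemsets.
from collections import Counter
from itertools import combinations
from multiprocessing import Pool

TRANSACTIONS = [frozenset(t) for t in
                [("a", "b", "c"), ("a", "c"), ("b", "c"), ("a", "b"), ("a", "c", "d")]]
CANDIDATES = [frozenset(c) for c in combinations("abcd", 2)]

def local_counts(partition):
    """Count candidate support on one partition (runs on one worker)."""
    return Counter(c for t in partition for c in CANDIDATES if c <= t)

if __name__ == "__main__":
    parts = [TRANSACTIONS[:3], TRANSACTIONS[3:]]       # two "sites"
    with Pool(2) as pool:
        merged = sum(pool.map(local_counts, parts), Counter())
    for c, n in merged.most_common():
        print(set(c), n)
```

Only the count vectors travel between workers, which is the communication-efficiency idea in miniature: message size scales with the candidate set, not with the transaction data.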
Abstract:
The issue of whether Real Estate Investment Trusts (REITs) should pursue a focused or diversified investment strategy remains an ongoing debate within both the academic and industry communities. This article considers the relationship between REITs focused on different property sectors in a Generalized Autoregressive Conditional Heteroscedasticity–Dynamic Conditional Correlation (GARCH-DCC) framework. The daily conditional correlations reveal that since 1990 there has been a marked upward trend in the coefficients between US REIT sub-sectors. The findings imply that REITs are behaving in a far more homogeneous manner than in the past. Furthermore, the argument that REITs should be sector-focused so that investors can make the diversification decision themselves is weakened.
Abstract:
The estimation of the long-term wind resource at a prospective site based on a relatively short on-site measurement campaign is an indispensable task in the development of a commercial wind farm. The typical industry approach is based on the measure-correlate-predict (MCP) method, where a relational model between the site wind velocity data and the data obtained from a suitable reference site is built from concurrent records. In a subsequent step, a long-term prediction for the prospective site is obtained from a combination of the relational model and the historic reference data. In the present paper, a systematic study is presented where three new MCP models, together with two published reference models (a simple linear regression and the variance ratio method), have been evaluated based on concurrent synthetic wind speed time series for two sites, simulating the prospective and the reference site. The synthetic method has the advantage of generating time series with the desired statistical properties, including Weibull scale and shape factors, required to evaluate the five methods under all plausible conditions. In this work, first a systematic discussion of the statistical fundamentals behind MCP methods is provided, and three new models, one based on a nonlinear regression and two (termed kernel methods) derived from the use of conditional probability density functions, are proposed. All models are evaluated by using five metrics under a wide range of values of the correlation coefficient, the Weibull scale, and the Weibull shape factor. Only one of the models, a kernel method based on bivariate Weibull probability functions, is capable of accurately predicting all performance metrics studied.
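A minimal sketch of the two published reference models named above, simple linear regression and the variance-ratio method, applied to synthetic concurrent wind speeds; it illustrates why the two methods reconstruct the long-term spread differently, which matters for recovering Weibull shape.

```python
# Sketch: two reference MCP models applied to synthetic concurrent wind speeds.
import numpy as np

rng = np.random.default_rng(5)
n = 2000
x = rng.weibull(2.0, n) * 8.0                       # reference-site speeds
y = 0.9 * x + rng.standard_normal(n) * 1.2          # concurrent on-site speeds

# 1) Simple linear regression: y_hat = a + b * x (least squares)
b, a = np.polyfit(x, y, 1)

# 2) Variance ratio: match mean and standard deviation instead of least squares
s = y.std() / x.std()
vr = lambda xr: y.mean() + s * (xr - x.mean())

x_longterm = rng.weibull(2.0, 20000) * 8.0          # historic reference record
print("long-term mean, regression:    ", (a + b * x_longterm).mean().round(2))
print("long-term mean, variance ratio:", vr(x_longterm).mean().round(2))
print("long-term std,  regression:    ", (a + b * x_longterm).std().round(2))
print("long-term std,  variance ratio:", vr(x_longterm).std().round(2))
```

Both methods match the long-term mean, but least-squares regression shrinks the predicted spread while the variance ratio preserves it, which is why distribution-aware metrics can separate the two.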
Abstract:
Radical contextualists have observed that the content of what is said by the utterance of a sentence is shaped in far-reaching ways by the context of utterance. And they have argued that the ways in which the content of what is said is shaped by context cannot be explained by semantic theory. A striking number of the examples that radical contextualists use to support their view involve sentences containing color adjectives (“red”, “green”, etc.). In this paper, I show how the most sophisticated analysis of color adjectives within the explanatory framework of compositional truth conditional semantics—recently developed by Kennedy and McNally (Synthese 174(1):79–98, 2010)—needs to be modified to handle the full range of contextual variation displayed by color adjectives.
Abstract:
This paper revisits the debate over the importance of absolute vs. relative income as a correlate of subjective well-being using data from Bangladesh, one of the poorest countries in the world, with high levels of corruption and poor governance. We do so by combining household data with population census and village survey records. Our results show that, conditional on own household income, respondents report higher satisfaction levels when they have experienced an increase in their income over the past years. More importantly, individuals who report their income to be lower than that of their neighbours in the village also report less satisfaction with life. At the same time, our evidence suggests that the relative wealth effect is stronger for the rich. Similarly, in villages with higher inequality, individuals report less satisfaction with life. However, compared to the effect of absolute income, these effects (i.e. relative income and local inequality) are modest. Amongst other factors, we study the influence of institutional quality, measured in terms of confidence in the police: it matters for well-being, entering with a positive and significant coefficient in the well-being function.
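A minimal sketch of the kind of well-being regression the abstract describes, with synthetic data and hypothetical variable names: life satisfaction regressed on own (absolute) income, income relative to the village mean, and a village inequality proxy, with standard errors clustered by village.

```python
# Sketch: absolute vs. relative income in a life-satisfaction regression.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
n = 3000
village = rng.integers(0, 50, n)
log_income = rng.normal(8.0, 1.0, n)
df = pd.DataFrame(dict(village=village, log_income=log_income))
df["village_mean"] = df.groupby("village")["log_income"].transform("mean")
df["inequality_proxy"] = df.groupby("village")["log_income"].transform("std")
df["relative_income"] = df["log_income"] - df["village_mean"]
df["satisfaction"] = (0.6 * df.log_income + 0.2 * df.relative_income
                      - 0.1 * df.inequality_proxy + rng.standard_normal(n))

fit = smf.ols("satisfaction ~ log_income + relative_income + inequality_proxy",
              df).fit(cov_type="cluster", cov_kwds={"groups": df["village"]})
print(fit.params.round(3))
```

The within-village dispersion here is only a crude stand-in for an inequality measure; the paper's actual specification and survey-based satisfaction scale are not reproduced.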
Abstract:
Decadal predictions have a high profile in the climate science community and beyond, yet very little is known about their skill. Nor is there any agreed protocol for estimating their skill. This paper proposes a sound and coordinated framework for verification of decadal hindcast experiments. The framework is illustrated for decadal hindcasts tailored to meet the requirements and specifications of CMIP5 (Coupled Model Intercomparison Project phase 5). The chosen metrics address key questions about the information content in initialized decadal hindcasts. These questions are: (1) Do the initial conditions in the hindcasts lead to more accurate predictions of the climate, compared to uninitialized climate change projections? and (2) Is the prediction model’s ensemble spread an appropriate representation of forecast uncertainty on average? The first question is addressed through deterministic metrics that compare the initialized and uninitialized hindcasts. The second question is addressed through a probabilistic metric applied to the initialized hindcasts and comparing different ways to ascribe forecast uncertainty. Verification is advocated at smoothed regional scales that can illuminate broad areas of predictability, as well as at the grid scale, since many users of the decadal prediction experiments who feed the climate data into applications or decision models will use the data at grid scale, or downscale it to even higher resolution. An overall statement on skill of CMIP5 decadal hindcasts is not the aim of this paper. The results presented are only illustrative of the framework, which would enable such studies. However, broad conclusions that are beginning to emerge from the CMIP5 results include (1) Most predictability at the interannual-to-decadal scale, relative to climatological averages, comes from external forcing, particularly for temperature; (2) though moderate, additional skill is added by the initial conditions over what is imparted by external forcing alone; however, the impact of initialization may result in overall worse predictions in some regions than provided by uninitialized climate change projections; (3) limited hindcast records and the dearth of climate-quality observational data impede our ability to quantify expected skill as well as model biases; and (4) as is common to seasonal-to-interannual model predictions, the spread of the ensemble members is not necessarily a good representation of forecast uncertainty. The authors recommend that this framework be adopted to serve as a starting point to compare prediction quality across prediction systems. The framework can provide a baseline against which future improvements can be quantified. The framework also provides guidance on the use of these model predictions, which differ in fundamental ways from the climate change projections that much of the community has become familiar with, including adjustment of mean and conditional biases, and consideration of how to best approach forecast uncertainty.
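A minimal sketch of the two verification questions, with synthetic hindcasts standing in for CMIP5 output: (1) a mean-squared-error skill score comparing initialized and uninitialized ensemble means, and (2) a rank histogram as a first check on whether the ensemble spread represents forecast uncertainty.

```python
# Sketch: MSE skill score (init vs. uninit) and a rank histogram for spread.
import numpy as np

rng = np.random.default_rng(7)
n_starts, n_members = 40, 10
obs = rng.standard_normal(n_starts)                                # "truth"
init = obs + rng.standard_normal((n_starts, n_members)) * 0.6      # initialized
uninit = rng.standard_normal((n_starts, n_members)) * 1.0          # uninitialized

# (1) MSE skill score of the initialized ensemble mean vs. uninitialized
mse_i = np.mean((init.mean(axis=1) - obs) ** 2)
mse_u = np.mean((uninit.mean(axis=1) - obs) ** 2)
print(f"MSE skill score (init vs uninit): {1 - mse_i / mse_u:.2f}")

# (2) Rank histogram: roughly flat if the observation is statistically
# indistinguishable from the ensemble members
ranks = np.sum(init < obs[:, None], axis=1)
hist = np.bincount(ranks, minlength=n_members + 1)
print("rank histogram counts:", hist)
```

A positive skill score answers question (1) in favour of initialization; a U-shaped or humped rank histogram flags under- or over-dispersive ensembles, which bears on question (2).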
Abstract:
Several methods are examined that allow forecasts for time series to be produced in the form of probability assignments. The necessary concepts are presented, addressing questions such as how to assess the performance of a probabilistic forecast. One particular class of models, cluster weighted models (CWMs), is given special attention. CWMs, originally proposed for deterministic forecasts, can be employed for probabilistic forecasting with little modification. Two examples are presented. The first involves estimating the state of (numerically simulated) dynamical systems from noise-corrupted measurements, a problem also known as filtering. There is an optimal solution to this problem, called the optimal filter, to which the considered time series models are compared. (The optimal filter requires the dynamical equations to be known.) In the second example, we aim at forecasting the chaotic oscillations of an experimental bronze spring system. Both examples demonstrate that the considered time series models, and especially the CWMs, provide useful probabilistic information about the underlying dynamical relations. In particular, they provide more than just an approximation to the conditional mean.
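The abstract does not give the CWM equations, so the following is a generic Gaussian-mixture stand-in for the cluster-weighted idea, not the authors' exact implementation: fit a mixture to the joint (x, y) sample and read off the conditional density p(y | x) as a mixture of component conditionals with x-dependent weights.

```python
# Sketch: conditional density from a joint Gaussian mixture (CWM-like).
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(8)
x = rng.uniform(-2, 2, 1000)
y = np.sin(2 * x) + rng.standard_normal(1000) * 0.2      # noisy nonlinear map

gmm = GaussianMixture(n_components=6, random_state=0).fit(np.column_stack([x, y]))

def conditional_density(y_grid, x0):
    """p(y | x0) as a weighted mixture of the components' conditional Gaussians."""
    dens = np.zeros_like(y_grid)
    w_total = 0.0
    for pi, mu, cov in zip(gmm.weights_, gmm.means_, gmm.covariances_):
        w = pi * norm.pdf(x0, mu[0], np.sqrt(cov[0, 0]))   # x-dependent weight
        m = mu[1] + cov[1, 0] / cov[0, 0] * (x0 - mu[0])   # conditional mean
        v = cov[1, 1] - cov[1, 0] ** 2 / cov[0, 0]         # conditional variance
        dens += w * norm.pdf(y_grid, m, np.sqrt(v))
        w_total += w
    return dens / w_total

grid = np.linspace(-2, 2, 201)
p = conditional_density(grid, x0=0.5)
print("conditional mean at x=0.5:", ((grid * p).sum() * (grid[1] - grid[0])).round(3))
```

The payoff is the full density p(y | x), not just its mean, which is precisely the "more than an approximation to the conditional mean" point the abstract makes.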
Abstract:
Seamless phase II/III clinical trials combine traditional phases II and III into a single trial conducted in two stages, with stage 1 used to answer phase II objectives such as treatment selection and stage 2 used for the confirmatory analysis, a phase III objective. Although seamless phase II/III clinical trials are efficient because the confirmatory analysis includes phase II data from stage 1, inference can pose statistical challenges. In this paper, we consider point estimation following seamless phase II/III clinical trials in which stage 1 is used to select the most effective experimental treatment and to decide whether, compared with a control, the trial should stop at stage 1 for futility. If the trial is not stopped, the phase III confirmatory part of the trial involves evaluation of the selected most effective experimental treatment and the control. We have developed two new estimators for the treatment difference between these two treatments, with the aim of reducing bias conditional on the treatment selection made and on the fact that the trial continues to stage 2. We demonstrate the properties of these estimators using simulations.
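A minimal sketch of the bias these estimators target, under illustrative design constants: select the best of several experimental arms at stage 1, continue only past a futility threshold, and measure the bias of the naive pooled estimate conditional on selection and continuation.

```python
# Sketch: conditional bias of the naive estimate after selection and futility stopping.
import numpy as np

rng = np.random.default_rng(9)
true_effects = np.array([0.0, 0.1, 0.2])   # three experimental arms vs. control
n1, n2, futility = 50, 100, 0.05           # stage sizes and futility bound
naive, truth = [], []
for _ in range(20000):
    stage1 = true_effects + rng.standard_normal(3) / np.sqrt(n1)
    best = int(np.argmax(stage1))           # select the apparently best arm
    if stage1[best] < futility:
        continue                            # trial stops at stage 1 for futility
    stage2 = true_effects[best] + rng.standard_normal() / np.sqrt(n2)
    pooled = (n1 * stage1[best] + n2 * stage2) / (n1 + n2)
    naive.append(pooled); truth.append(true_effects[best])
bias = np.mean(np.array(naive) - np.array(truth))
print(f"conditional bias of the naive pooled estimate: {bias:+.4f}")
```

Because the arm is chosen for looking good at stage 1 and the trial only continues when it clears the futility bound, the naive pooled estimate is biased upwards conditional on those events, which is the bias the paper's new estimators aim to reduce.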
Abstract:
We discuss the challenge to truth-conditional semantics presented by apparent shifts in extension of predicates such as ‘red’. We propose an explicit indexical semantics for ‘red’ and argue that our account is preferable to the alternatives on conceptual and empirical grounds.