46 results for C33 - Models with Panel Data

in QUB Research Portal - Research Directory and Institutional Repository for Queen's University Belfast


Relevance:

100.00%

Publisher:

Abstract:

Considering the limit theory in which T is fixed and N is allowed to go to infinity improves the finite-sample properties of the tests and avoids having to impose relative rates at which T and N go to infinity.

Abstract:

This paper investigates the performance of the tests proposed by Hadri and by Hadri and Larsson for testing for stationarity in heterogeneous panel data under model misspecification. The panel tests are based on the well-known KPSS test (cf. Kwiatkowski et al.), which considers two models: stationarity around a deterministic level and stationarity around a deterministic trend. As far as we know, there is no study of the statistical properties of the test when the wrong model is used. We also consider the case of the simultaneous presence of the two types of models in a panel. We employ two asymptotics: joint asymptotics, where T, N -> infinity simultaneously, and fixed-T asymptotics, where T is fixed and N is allowed to grow indefinitely. We use Monte Carlo experiments to investigate the effects of misspecification for sample sizes typically used in practice. The results indicate that the tests derived under the assumption that T is fixed rather than asymptotic have smaller size distortions, particularly for panels with relatively small T and large N (micro-panels), than the tests derived under joint asymptotics. We also find that choosing a deterministic trend when a deterministic level is true does not significantly affect the properties of the test, but choosing a deterministic level when a deterministic trend is true leads to extreme over-rejections. Therefore, when unsure about which model has generated the data, we suggest using the model with a trend. We also propose a new statistic for testing for stationarity in mixed panel data where the mixture is known. The performance of this new test is very good for both the T-asymptotic and fixed-T cases. The statistic for T asymptotic is slightly undersized when T is very small (
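The KPSS-type statistic underlying these panel tests is simple to compute. The sketch below, using synthetic data, forms the statistic from the partial sums of regression residuals (level or trend case) and averages it over the cross-section, in the spirit of Hadri-style panel tests. The long-run variance is estimated naively under an iid assumption, and all function names and data are our own illustration, not the authors' code.

```python
import numpy as np

def kpss_stat(y, trend=False):
    """KPSS stationarity statistic for one series (level or trend case).

    Residuals come from regressing y on a constant (level) or on a
    constant plus linear trend; the statistic scales the sum of squared
    partial sums of those residuals.
    """
    T = len(y)
    t = np.arange(1, T + 1)
    X = np.column_stack([np.ones(T), t]) if trend else np.ones((T, 1))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ beta
    S = np.cumsum(e)            # partial sums of residuals
    sigma2 = e @ e / T          # naive long-run variance (iid errors assumed)
    return (S @ S) / (T ** 2 * sigma2)

def hadri_panel_stat(panel, trend=False):
    """Average individual KPSS statistics over the N units,
    as in Hadri-style panel stationarity tests (sketch only)."""
    return np.mean([kpss_stat(y, trend) for y in panel])

rng = np.random.default_rng(0)
stationary = [rng.normal(size=200) for _ in range(20)]
random_walks = [np.cumsum(rng.normal(size=200)) for _ in range(20)]
print(hadri_panel_stat(stationary))    # small: stationarity not rejected
print(hadri_panel_stat(random_walks))  # large: stationarity rejected
```

A real implementation would also use a kernel-based long-run variance estimator and the appropriate critical values, omitted here for brevity.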

Abstract:

In this paper, we test the Prebisch-Singer (PS) hypothesis, which states that real commodity prices decline in the long run, using two recent powerful panel data stationarity tests that account for cross-sectional dependence and a structural break. We find that the hypothesis cannot be rejected for most commodities other than oil.
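The economic content of the PS hypothesis can be illustrated with a simple trend regression (not the panel stationarity tests used in the paper): regress the log real price on a linear trend and inspect the sign and significance of the slope. The data below are synthetic.

```python
import numpy as np

# Synthetic log real commodity price with a mild downward trend plus noise
# (illustrative data only).
rng = np.random.default_rng(1)
T = 120
t = np.arange(T)
log_price = 2.0 - 0.004 * t + rng.normal(scale=0.05, size=T)

# OLS of log price on a constant and linear trend
X = np.column_stack([np.ones(T), t])
beta, *_ = np.linalg.lstsq(X, log_price, rcond=None)
resid = log_price - X @ beta
s2 = resid @ resid / (T - 2)
se_slope = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
t_stat = beta[1] / se_slope
print(beta[1], t_stat)   # a significantly negative slope supports PS
```

Note that this naive t-test is only valid if the price series is trend-stationary, which is exactly what the stationarity tests in the paper establish first.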

Abstract:

This paper proposes an improved covariate unit root test that exploits cross-sectional dependence information once the panel data null hypothesis of a unit root is rejected. More explicitly, to increase the power of the test, we suggest using more than one covariate and offer several ways to select the ‘best’ covariates from the set of potential covariates represented by the individuals in the panel. Employing our methods, we investigate the Prebisch-Singer hypothesis for nine commodity prices. Our results show that this hypothesis holds for all but the price of petroleum.
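As an illustration of covariate selection, one simple heuristic (our own, not necessarily a criterion from the paper) ranks the other panel members by how strongly their first differences correlate with the target series' first differences:

```python
import numpy as np

def pick_best_covariates(panel, target_idx, k=2):
    """Rank candidate covariates (the other panel members) by the absolute
    correlation of their first differences with the target's first
    differences, returning the indices of the top k.

    This is an illustrative selection heuristic, not the paper's rule.
    """
    dy = np.diff(panel[target_idx])
    scores = {}
    for j, series in enumerate(panel):
        if j == target_idx:
            continue
        dx = np.diff(series)
        scores[j] = abs(np.corrcoef(dy, dx)[0, 1])
    return sorted(scores, key=scores.get, reverse=True)[:k]

rng = np.random.default_rng(2)
common = rng.normal(size=200)                   # shared shock across units
panel = [np.cumsum(common + rng.normal(size=200)) for _ in range(5)]
panel.append(np.cumsum(rng.normal(size=200)))   # purely idiosyncratic series
print(pick_best_covariates(panel, target_idx=0))  # idiosyncratic unit ranks last
```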

Abstract:

This paper reports the findings from a discrete-choice experiment designed to estimate the economic benefits associated with rural landscape improvements in Ireland. Using a mixed logit model, the panel nature of the dataset is exploited to retrieve willingness-to-pay values for every individual in the sample. This departs from customary approaches in which the willingness-to-pay estimates are normally expressed as measures of central tendency of an a priori distribution. Random-effects models for panel data are subsequently used to identify the determinants of the individual-specific willingness-to-pay estimates. In comparison with the standard methods used to incorporate individual-specific variables into the analysis of discrete-choice experiments, the analytical approach outlined in this paper is shown to add considerable explanatory power to the welfare estimates.
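The willingness-to-pay arithmetic behind such models is short: in a utility specification U = b_attr·attribute + b_cost·cost + e, the marginal WTP for the attribute is -b_attr / b_cost, and individual-specific coefficients from a mixed logit yield one WTP value per respondent. The coefficients below are illustrative assumptions, not the paper's estimates.

```python
import numpy as np

def wtp(b_attr, b_cost):
    """Marginal willingness to pay implied by a linear-in-cost utility."""
    return -b_attr / b_cost

# Hypothetical individual-specific attribute coefficients, e.g. posterior
# means from a mixed logit, with a fixed cost coefficient.
rng = np.random.default_rng(3)
b_attr_i = rng.normal(loc=0.8, scale=0.2, size=500)  # taste for landscape
b_cost_i = -0.05                                     # cost coefficient
wtp_i = wtp(b_attr_i, b_cost_i)
print(wtp_i.mean())   # close to 0.8 / 0.05 = 16
```

The vector `wtp_i` is what the paper's second stage regresses on individual characteristics with a random-effects panel model.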

Abstract:

We present results for a suite of 14 three-dimensional, high-resolution hydrodynamical simulations of delayed-detonation models of Type Ia supernova (SN Ia) explosions. This model suite comprises the first set of three-dimensional SN Ia simulations with detailed isotopic yield information. As such, it may serve as a database for Chandrasekhar-mass delayed-detonation model nucleosynthetic yields and for deriving synthetic observables such as spectra and light curves. We employ a physically motivated, stochastic model based on turbulent velocity fluctuations and fuel density to calculate in situ the deflagration-to-detonation transition probabilities. To obtain different strengths of the deflagration phase and thereby different degrees of pre-expansion, we have chosen a sequence of initial models with 1, 3, 5, 10, 20, 40, 100, 150, 200, 300 and 1600 (two different realizations) ignition kernels in a hydrostatic white dwarf with a central density of 2.9 × 10^9 g cm^-3, as well as one high central density (5.5 × 10^9 g cm^-3) and one low central density (1.0 × 10^9 g cm^-3) rendition of the 100 ignition kernel configuration. For each simulation, we determined detailed nucleosynthetic yields by postprocessing 10^6 tracer particles with a 384-nuclide reaction network. All delayed-detonation models result in explosions unbinding the white dwarf, producing a range of 56Ni masses from 0.32 to 1.11 M☉. As a general trend, the models predict that the stable neutron-rich iron-group isotopes are not found at the lowest velocities, but rather at intermediate velocities (~3000-10 000 km s^-1) in a shell surrounding a Ni-rich core. The models further predict relatively low-velocity oxygen and carbon, with typical minimum velocities around 4000 and 10 000 km s^-1, respectively. © 2012 The Authors. Published by Oxford University Press on behalf of the Royal Astronomical Society.
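The stochastic DDT criterion can be caricatured in a few lines: sample turbulent velocity fluctuations and trigger a detonation only when they exceed a critical value while the fuel density sits in an assumed window. Every threshold and distribution below is a made-up illustration, not the calibrated model of the paper.

```python
import numpy as np

def ddt_probability(v_rms, rho_fuel, v_crit=5e7, rho_lo=5e6, rho_hi=5e7,
                    n_samples=100_000, seed=4):
    """Toy Monte Carlo estimate of a deflagration-to-detonation transition
    probability: sample turbulent velocity fluctuations (log-normal around
    v_rms, in cm/s) and count draws exceeding v_crit while the fuel density
    (g/cm^3) lies inside an assumed DDT-friendly window.

    All constants here are illustrative assumptions.
    """
    if not (rho_lo <= rho_fuel <= rho_hi):
        return 0.0
    rng = np.random.default_rng(seed)
    v = rng.lognormal(mean=np.log(v_rms), sigma=0.5, size=n_samples)
    return np.mean(v > v_crit)

print(ddt_probability(v_rms=1e7, rho_fuel=1e7))  # weak turbulence: rare DDT
print(ddt_probability(v_rms=4e8, rho_fuel=1e7))  # strong turbulence: likely DDT
print(ddt_probability(v_rms=4e8, rho_fuel=1e9))  # density outside window: 0
```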

Abstract:

Hidden Markov models (HMMs) are widely used models for sequential data. As with other probabilistic graphical models, they require the specification of precise probability values, which can be too restrictive for some domains, especially when data are scarce or costly to acquire. We present a generalized version of HMMs, whose quantification can be done by sets of, instead of single, probability distributions. Our models have the ability to suspend judgment when there is not enough statistical evidence, and can serve as a sensitivity analysis tool for standard non-stationary HMMs. Efficient inference algorithms are developed to address standard HMM usage such as the computation of likelihoods and most probable explanations. Experiments with real data show that the use of imprecise probabilities leads to more reliable inferences without compromising efficiency.

Abstract:

Successful innovation depends on knowledge – technological, strategic and market related. In this paper we explore the role and interaction of firms’ existing knowledge stocks and current knowledge flows in shaping innovation success. The paper contributes to our understanding of the determinants of firms’ innovation outputs and provides new information on the relationship between knowledge stocks, as measured by patents, and innovation output indicators. Our analysis uses innovation panel data relating to plants’ internal knowledge creation, external knowledge search and innovation outputs. Firm-level patent data is matched with this plant-level innovation panel data to provide a measure of firms’ knowledge stock. Two substantive conclusions follow. First, existing knowledge stocks have weak negative rather than positive impacts on firms’ innovation outputs, reflecting potential core-rigidities or negative path dependencies rather than the accumulation of competitive advantages. Second, knowledge flows derived from internal investment and external search dominate the effect of existing knowledge stocks on innovation performance. Both results emphasize the importance of firms’ knowledge search strategies. Our results also re-emphasize the potential issues which arise when using patents as a measure of innovation.

Abstract:

Hidden Markov models (HMMs) are widely used probabilistic models of sequential data. As with other probabilistic models, they require the specification of local conditional probability distributions, whose assessment can be too difficult and error-prone, especially when data are scarce or costly to acquire. The imprecise HMM (iHMM) generalizes HMMs by allowing the quantification to be done by sets of, instead of single, probability distributions. iHMMs have the ability to suspend judgment when there is not enough statistical evidence, and can serve as a sensitivity analysis tool for standard non-stationary HMMs. In this paper, we consider iHMMs under the strong independence interpretation, for which we develop efficient inference algorithms to address standard HMM usage such as the computation of likelihoods and most probable explanations, as well as performing filtering and predictive inference. Experiments with real data show that iHMMs produce more reliable inferences without compromising the computational efficiency.
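For the most-probable-explanation task, the precise baseline is the Viterbi recursion; the imprecise variants replace its single maximization with maximin/maximax recursions over the probability sets. A sketch of the precise case, with illustrative parameters:

```python
import numpy as np

def viterbi(A, B, pi, obs):
    """Most probable state sequence (MPE) for a standard HMM via dynamic
    programming in log space, with backpointers for path recovery."""
    n, T = len(pi), len(obs)
    logA, logB, logpi = np.log(A), np.log(B), np.log(pi)
    delta = logpi + logB[:, obs[0]]
    back = np.zeros((T, n), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + logA       # scores[i, j]: i -> j
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + logB[:, obs[t]]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# Sticky two-state chain with informative emissions
A = np.array([[0.9, 0.1], [0.1, 0.9]])
B = np.array([[0.8, 0.2], [0.2, 0.8]])
pi = np.array([0.5, 0.5])
print(viterbi(A, B, pi, [0, 0, 1, 1, 1]))   # -> [0, 0, 1, 1, 1]
```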

Abstract:

Cross sections for the multi-ionization of He and Li are presented for impact energies in the range 50 to 1000 keV/amu. These are calculated using the eikonal initial-state approximation to represent the input and exit channels of the active electrons. The ionization process is simulated in a variety of ways, most notably an attempt to account for the effects of electron correlation via the inclusion of a continuum density of states (CDS) term. Inadequacies of the CDW formulation at small impact parameters, and of the models themselves, are discussed, and conclusions are drawn on the repercussions for the calculated cross sections.

Abstract:

Surrogate-based optimization methods provide a means to achieve high-fidelity design optimization at reduced computational cost by using a high-fidelity model in combination with lower-fidelity models that are less expensive to evaluate. This paper presents a provably convergent trust-region model-management methodology for variable-parameterization design models, that is, models whose design parameters are defined over different spaces. Corrected space mapping is introduced as a method to map between the variable-parameterization design spaces. It is then used with a sequential-quadratic-programming-like trust-region method on two aerospace-related design optimization problems. Results for a wing design problem and a flapping-flight problem show that the method outperforms direct optimization in the high-fidelity space. On the wing design problem, the new method achieves 76% savings in high-fidelity function calls. On a bat-flight design problem, it achieves approximately 45% time savings, although it converges to a different local minimum than did the benchmark.
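The trust-region model-management loop can be sketched in one dimension: additively correct the cheap model so it matches the expensive model's value and gradient at the current point (first-order consistency), minimize the corrected model inside the trust region, and adapt the radius with the usual ratio test. The models and constants below are illustrative stand-ins, not the paper's aerospace problems, and the inner minimization is a simple grid search.

```python
import numpy as np

def hi_fi(x):   # "expensive" model (illustrative)
    return (x - 2.0) ** 2 + 0.3 * np.sin(5 * x)

def lo_fi(x):   # cheap surrogate with systematic error
    return (x - 2.0) ** 2

def tr_surrogate_minimize(x0, delta=1.0, iters=30):
    """First-order-consistent trust-region model management (sketch)."""
    x, h = x0, 1e-5
    for _ in range(iters):
        f, flo = hi_fi(x), lo_fi(x)
        g = (hi_fi(x + h) - hi_fi(x - h)) / (2 * h)     # FD gradients
        glo = (lo_fi(x + h) - lo_fi(x - h)) / (2 * h)
        corrected = lambda s: lo_fi(s) + (f - flo) + (g - glo) * (s - x)
        cand = np.linspace(x - delta, x + delta, 201)   # trust-region grid
        s = cand[np.argmin(corrected(cand))]
        pred = f - corrected(s)                         # predicted reduction
        actual = f - hi_fi(s)                           # actual reduction
        rho = actual / pred if pred > 0 else -1.0
        if rho > 0.1:          # accept the step
            x = s
            if rho > 0.75:
                delta *= 2.0   # good agreement: expand region
        else:
            delta *= 0.5       # poor agreement: shrink region
    return x

x_star = tr_surrogate_minimize(x0=0.0)
print(x_star, hi_fi(x_star))
```

Because steps are only accepted when they reduce the true objective, the iterates improve monotonically in `hi_fi`; which local minimum is reached depends on the start and radius, echoing the paper's observation about the bat-flight problem.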