856 results for C33 - Models with Panel Data
Abstract:
In this paper we present a new method for performing Bayesian parameter inference and model choice for low-count time series models with intractable likelihoods. The method involves incorporating an alive particle filter within a sequential Monte Carlo (SMC) algorithm to create a novel pseudo-marginal algorithm, which we refer to as alive SMC^2. The advantages of this approach over competing approaches are that it is naturally adaptive, it does not involve the between-model proposals required in reversible jump Markov chain Monte Carlo, and it does not rely on potentially rough approximations. The algorithm is demonstrated on Markov process and integer autoregressive moving average models applied to real biological datasets of hospital-acquired pathogen incidence, animal health time series and the cumulative number of prion disease cases in mule deer.
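A minimal sketch of the alive particle filter likelihood estimator at the heart of this construction, assuming a discrete-observation state-space model; init_particle and simulate_transition are hypothetical user-supplied simulators, and this illustrates the idea rather than the authors' implementation.

import numpy as np

def alive_particle_filter(y_obs, init_particle, simulate_transition, N, rng,
                          max_tries=10**6):
    """Estimate log p(y_1:T) for a model that can only be simulated.

    At each time step, particles are resampled and propagated until N + 1
    of them reproduce the observed count exactly; by a negative-binomial
    argument, N / (n_t - 1) is then an unbiased estimate of the likelihood
    increment, where n_t is the total number of attempts.
    """
    particles = [init_particle(rng) for _ in range(N)]
    log_like = 0.0
    for y in y_obs:
        alive, tries = [], 0
        while len(alive) < N + 1:
            tries += 1
            if tries > max_tries:               # guard: observation looks impossible
                return -np.inf
            parent = particles[rng.integers(N)]           # uniform resampling
            state, y_sim = simulate_transition(parent, rng)
            if y_sim == y:                                # particle stays 'alive'
                alive.append(state)
        log_like += np.log(N) - np.log(tries - 1)         # unbiased increment
        particles = alive[:N]                             # drop the (N+1)-th match
    return log_like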
Abstract:
In this paper it is demonstrated how the Bayesian parametric bootstrap can be adapted to models with intractable likelihoods. The approach is most appealing when semi-automatic approximate Bayesian computation (ABC) summary statistics are selected. After a pilot run of ABC, the likelihood-free parametric bootstrap approach requires very few model simulations to produce an approximate posterior, which can be a useful approximation in its own right. An alternative is to use this approximation as a proposal distribution in ABC algorithms to make them more efficient. In this paper, the parametric bootstrap approximation is used to form the initial importance distribution for ABC sequential Monte Carlo and for ABC importance and rejection sampling algorithms. The new approach is illustrated through a simulation study of the univariate g-and-k quantile distribution, and is used to infer parameter values of a stochastic model describing expanding melanoma cell colonies.
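A minimal sketch of the proposal idea under stated assumptions: a parametric bootstrap refits a point estimator to data simulated at a pilot estimate, and the resulting draws (together with a density fitted to them, passed here as proposal_logpdf, e.g. a normal fit) serve as the importance distribution in ABC rejection/importance sampling. The helper names simulate, summarise and point_estimate are hypothetical, and the paper's semi-automatic summaries are more elaborate.

import numpy as np

def bootstrap_proposal(theta_hat, simulate, point_estimate, B, rng):
    """Parametric bootstrap: the spread of re-estimates from data simulated
    at theta_hat approximates the spread of the posterior."""
    return np.asarray([point_estimate(simulate(theta_hat, rng))
                       for _ in range(B)])

def abc_importance(y_obs, simulate, summarise, proposal_draws,
                   prior_logpdf, proposal_logpdf, eps, rng):
    """ABC importance sampling: propose from the bootstrap approximation,
    accept when summaries fall within eps, weight by prior / proposal."""
    s_obs = summarise(y_obs)
    kept, weights = [], []
    for theta in proposal_draws:
        s_sim = summarise(simulate(theta, rng))
        if np.linalg.norm(s_sim - s_obs) < eps:           # ABC accept step
            kept.append(theta)
            weights.append(np.exp(prior_logpdf(theta) - proposal_logpdf(theta)))
    return np.asarray(kept), np.asarray(weights)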
Abstract:
We present relativistic, classical particle models that possess Poincaré invariance, invariant world lines, particle interaction, and separability.
Abstract:
Purpose – Business models to date have remained the creation of management; however, it is the belief of the authors that designers should be critically approaching, challenging and creating new business models as part of their practice. This belief points to a new era in which business model constructs become the design brief of the future and fuel design and innovation to work together at the strategic level of an organisation. Design/methodology/approach – The purpose of this paper is to explore and investigate business model design. The research followed a deductive, structured qualitative content analysis approach utilizing a predetermined categorization matrix. The analysis of forty business cases uncovered commonalities in the key strategic drivers behind these innovative business models. Findings – Five business model typologies were derived from this content analysis, from which quick prototypes of new business models can be created. Research limitations/implications – Implications from this research suggest there is no "one right" model; rather, through experimentation, the generation of many unique and diverse concepts can result in greater possibilities for future innovation and sustained competitive advantage. Originality/value – This paper builds upon the emerging research into the importance and relevance of dynamic, design-driven approaches to the creation of innovative business models. These typologies aim to synthesize knowledge gained from real-world examples into a tangible, accessible and provoking framework that provides new prototyping templates to aid the process of business model experimentation.
Abstract:
Robust estimation often relies on a dispersion function that is more slowly varying at large values than the square function. However, the choice of tuning constant in dispersion functions may impact the estimation efficiency to a great extent. For a given family of dispersion functions such as the Huber family, we suggest obtaining the "best" tuning constant from the data so that the asymptotic efficiency is maximized. This data-driven approach can automatically adjust the value of the tuning constant to provide the necessary resistance against outliers. Simulation studies show that substantial efficiency can be gained by this data-dependent approach compared with the traditional approach in which the tuning constant is fixed. We briefly illustrate the proposed method using two datasets.
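A minimal sketch of the data-driven choice for the Huber family: the M-estimator's asymptotic variance is proportional to V(c) = E[psi_c(e)^2] / (E[psi_c'(e)])^2, so estimating V(c) from standardized pilot residuals and minimising over a grid of c maximises the estimated asymptotic efficiency. The MAD standardisation and the grid range are illustrative assumptions.

import numpy as np

def huber_psi(e, c):
    return np.clip(e, -c, c)                        # psi_c(e): linear, then capped

def asymptotic_variance(e, c):
    """Plug-in estimate of V(c) = E[psi^2] / (E[psi'])^2 from residuals e."""
    psi = huber_psi(e, c)
    dpsi = (np.abs(e) <= c).astype(float)           # psi'_c(e) = 1{|e| <= c}
    return np.mean(psi**2) / np.mean(dpsi)**2

def best_tuning_constant(resid, grid=np.linspace(0.5, 3.0, 26)):
    """Pick the c on the grid minimising the estimated asymptotic variance."""
    scale = np.median(np.abs(resid - np.median(resid))) / 0.6745   # MAD scale
    e = (resid - np.median(resid)) / scale
    return grid[np.argmin([asymptotic_variance(e, c) for c in grid])]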
Abstract:
This paper proposes solutions to three issues pertaining to the estimation of finite mixture models with an unknown number of components: the non-identifiability induced by overfitting the number of components, the mixing limitations of standard Markov chain Monte Carlo (MCMC) sampling techniques, and the related label-switching problem. An overfitting approach is used to estimate the number of components in a finite mixture model via the Zmix algorithm. Zmix provides a bridge between multidimensional samplers and test-based estimation methods, whereby priors are chosen to encourage extra groups to have weights approaching zero. MCMC sampling is made possible by the implementation of prior parallel tempering, an extension of parallel tempering. Zmix can accurately estimate the number of components, posterior parameter estimates and allocation probabilities given a sufficiently large sample size. The results reflect uncertainty in the final model and report the range of possible candidate models and their respective estimated probabilities from a single run. Label switching is resolved with a computationally lightweight method, Zswitch, developed for overfitted mixtures by exploiting the intuitiveness of allocation-based relabelling algorithms and the precision of label-invariant loss functions. Four simulation studies are included to illustrate Zmix and Zswitch, as well as three case studies from the literature. All methods are available as part of the R package Zmix, which can currently be applied to univariate Gaussian mixture models.
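A minimal sketch of the overfitting mechanism (not the Zmix package itself): one Gibbs sweep for a univariate Gaussian mixture with K components, a sparse Dirichlet prior on the weights (concentration alpha well below 1, which encourages redundant components to empty out), and the number of occupied components read off per sweep. A known component variance sigma2 and the omission of prior parallel tempering are simplifying assumptions.

import numpy as np

def gibbs_sweep(y, z, mu, K, alpha, sigma2, rng, tau2=100.0):
    """One sweep for an overfitted mixture; returns updated allocations z,
    means mu, and the number of currently occupied components."""
    n_k = np.bincount(z, minlength=K)
    w = rng.dirichlet(alpha + n_k)                        # sparse prior on weights
    for k in range(K):                                    # conjugate normal means
        prec = n_k[k] / sigma2 + 1.0 / tau2
        mu[k] = rng.normal((y[z == k].sum() / sigma2) / prec,
                           np.sqrt(1.0 / prec))
    logp = (np.log(np.maximum(w, 1e-300))
            - 0.5 * (y[:, None] - mu)**2 / sigma2)        # allocation log-probs
    p = np.exp(logp - logp.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    z = np.array([rng.choice(K, p=pi) for pi in p])
    return z, mu, int((np.bincount(z, minlength=K) > 0).sum())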
Abstract:
The problem of recovering information from measurement data has been studied for a long time. Early methods were mostly empirical, but towards the end of the 1960s Backus and Gilbert began developing mathematical methods for the interpretation of geophysical data. Recovering information about a physical phenomenon from measurement data is an inverse problem. Throughout this work, the statistical inversion method is used to obtain a solution. Assuming that the measurement vector is a realization of fractional Brownian motion, the goal is to retrieve the amplitude and the Hurst parameter. We prove that under some conditions, the solution of the discretized problem coincides with the solution of the corresponding continuous problem as the number of observations tends to infinity. The measurement data are usually noisy, and we assume the data to be the sum of two vectors: the trend and the noise. Both vectors are taken to be realizations of fractional Brownian motions, and the goal is to retrieve their parameters using the statistical inversion method. We prove partial uniqueness of the solution. Moreover, supported by numerical simulations, we show that in certain cases the solution is reliable and the reconstruction of the trend vector is quite accurate.
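A minimal sketch of the single-fBm case under a flat prior and unit time spacing: the exact fBm covariance is Cov(B_H(t), B_H(s)) = (a^2/2)(t^{2H} + s^{2H} - |t - s|^{2H}), and the Hurst index can be recovered by maximising the Gaussian log-likelihood. The trend-plus-noise problem treated in the work adds a second such covariance; this sketch handles only one.

import numpy as np
from scipy.optimize import minimize_scalar

def fbm_cov(n, H, a=1.0):
    """Covariance matrix of a * B_H observed at t = 1, ..., n."""
    t = np.arange(1.0, n + 1)[:, None]
    s = np.arange(1.0, n + 1)[None, :]
    return 0.5 * a**2 * (t**(2 * H) + s**(2 * H) - np.abs(t - s)**(2 * H))

def neg_log_likelihood(H, x):
    C = fbm_cov(len(x), H)
    _, logdet = np.linalg.slogdet(C)                  # stable log-determinant
    return 0.5 * (logdet + x @ np.linalg.solve(C, x))

def estimate_hurst(x):
    """Posterior mode of H under a flat prior on (0, 1)."""
    res = minimize_scalar(neg_log_likelihood, bounds=(0.01, 0.99),
                          args=(x,), method="bounded")
    return res.x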
Abstract:
Farming systems frameworks such as the Agricultural Production Systems simulator (APSIM) represent fluxes through the soil, plant and atmosphere of the system well, but do not generally consider the biotic constraints that operate within the system. We designed a method that allows population models built in DYMEX to interact with APSIM. The simulator engine component of the DYMEX population-modelling platform was wrapped within an APSIM module, allowing it to get and set variable values in other APSIM models running in the simulation. A rust model developed in DYMEX is used to demonstrate how the developing rust population reduces the crop's green leaf area. The success of the linking is seen in the interaction of the two models: changes in the rust population on the crop's leaves feed back to the APSIM crop, modifying the growth and development of the crop's leaf area. This linking of population models to simulate pest populations with biophysical models to simulate crop growth and development increases the complexity of the simulation, but provides a tool to investigate biotic constraints within farming systems and moves APSIM further towards being an agro-ecological framework.
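A minimal sketch of the get/set variable-exchange pattern described above, with hypothetical CropModel and RustModel classes standing in for the APSIM crop module and the wrapped DYMEX simulator engine. The coefficients are toy values chosen for illustration; this shows the coupling pattern only, not APSIM or DYMEX code.

class CropModel:
    """Stand-in for an APSIM crop module exposing a leaf-area variable."""
    def __init__(self):
        self.green_leaf_area = 2.0          # leaf area index (toy value)
    def step(self):
        self.green_leaf_area += 0.05        # daily leaf growth (toy value)

class RustModel:
    """Stand-in for the wrapped DYMEX simulator engine."""
    def __init__(self):
        self.population = 1.0               # pustule density (toy units)
    def step(self, leaf_area):
        self.population *= 1.0 + 0.1 * leaf_area     # more leaf, faster growth
        return 0.002 * self.population                # leaf area destroyed today

crop, rust = CropModel(), RustModel()
for day in range(120):
    crop.step()                                       # host model advances
    damage = rust.step(crop.green_leaf_area)          # wrapper 'gets' a variable
    crop.green_leaf_area = max(0.0, crop.green_leaf_area - damage)  # and 'sets' it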
Abstract:
This thesis studies binary time series models and their applications in empirical macroeconomics and finance. In addition to previously suggested models, new dynamic extensions are proposed to the static probit model commonly used in the earlier literature. In particular, we are interested in probit models with an autoregressive model structure. In Chapter 2, the main objective is to compare the predictive performance of the static and dynamic probit models in forecasting the U.S. and German business cycle recession periods. Financial variables, such as interest rates and stock market returns, are used as predictive variables. The empirical results suggest that the recession periods are predictable and that dynamic probit models, especially those with the autoregressive structure, outperform the static model. Chapter 3 proposes a Lagrange Multiplier (LM) test for the usefulness of the autoregressive structure of the probit model. The finite sample properties of the LM test are examined with simulation experiments. Results indicate that the two alternative LM test statistics have reasonable size and power in large samples. In small samples, a parametric bootstrap method is suggested to obtain approximately correct size. In Chapter 4, the predictive power of dynamic probit models in predicting the direction of stock market returns is examined. The novel idea is to use the recession forecast (see Chapter 2) as a predictor of the stock return sign. The evidence suggests that the signs of the U.S. excess stock returns over the risk-free return are predictable both in and out of sample. The new "error correction" probit model yields the best forecasts and also outperforms other predictive models, such as ARMAX models, in terms of statistical and economic goodness-of-fit measures. Chapter 5 generalizes the analysis of the univariate models considered in Chapters 2–4 to the case of a bivariate model. A new bivariate autoregressive probit model is applied to predict the current state of the U.S. business cycle and growth rate cycle periods. Evidence of predictability of both cycle indicators is obtained, and the bivariate model is found to outperform the univariate models in terms of predictive power.
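For concreteness, the autoregressive probit structure referred to above can be written as follows; the notation is assumed for illustration rather than quoted from the thesis:

P(y_t = 1 \mid \mathcal{F}_{t-1}) = \Phi(\pi_t), \qquad \pi_t = \omega + \alpha_1 \pi_{t-1} + \beta' x_{t-1},

where \Phi is the standard normal distribution function and \mathcal{F}_{t-1} is the information set at time t-1. The static probit corresponds to \alpha_1 = 0, and richer dynamic variants add lagged values of y_t to the linear form \pi_t.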
Abstract:
In this article, the problem of two Unmanned Aerial Vehicles (UAVs) cooperatively searching an unknown region is addressed. The search region is discretized into hexagonal cells and each cell is assumed to possess an uncertainty value. The UAVs have to cooperatively search these cells while taking limited endurance, sensor and communication range constraints into account. Due to limited endurance, the UAVs need to return to the base station for refuelling, and they also need to select a base station when multiple base stations are present. This article proposes a route planning algorithm that takes endurance time constraints into account and uses game-theoretic strategies to reduce the uncertainty. The route planning algorithm selects only those cells that ensure the agent can return to one of the available bases. A set of paths is formed using these cells, from which the game-theoretic strategies select a path that yields maximum uncertainty reduction. We explore non-cooperative Nash, cooperative and security strategies from game theory to enhance the search effectiveness. Monte Carlo simulations are carried out which show the superiority of the game-theoretic strategies over a greedy strategy for paths of different look-ahead step lengths. Within the game-theoretic strategies, the non-cooperative Nash and cooperative strategies perform similarly in an ideal case, but the Nash strategy performs better than the cooperative strategy when the perceived information differs. We also propose a heuristic based on partitioning the search space into sectors to reduce computational overhead without performance degradation.
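A minimal sketch of the endurance-constrained path enumeration described above: among all k-step paths from the current cell, keep only those whose endpoint can still reach some base within the remaining endurance, then pick the survivor with the largest summed uncertainty. The hexagonal geometry, the sensing model and the game-theoretic strategy selection are abstracted into the supplied callables, which are assumptions for illustration.

def feasible_paths(pos, k, neighbours, bases, fuel, dist):
    """All k-step paths from pos whose endpoint can still reach a base."""
    paths = [[pos]]
    for _ in range(k):                               # grow paths one step at a time
        paths = [p + [q] for p in paths for q in neighbours(p[-1])]
    return [p for p in paths
            if min(dist(p[-1], b) for b in bases) <= fuel - k]

def best_path(pos, k, neighbours, bases, fuel, dist, uncertainty):
    """Pick the feasible path that maximises total uncertainty reduction."""
    cands = feasible_paths(pos, k, neighbours, bases, fuel, dist)
    if not cands:
        return None                                  # must head back to a base
    return max(cands, key=lambda p: sum(uncertainty[c] for c in p[1:]))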
Abstract:
Inspired by the exact solution of the Majumdar-Ghosh model, a family of one-dimensional, translationally invariant spin Hamiltonians is constructed. The exchange coupling in these models is antiferromagnetic, and decreases linearly with the separation between the spins. The coupling becomes identically zero beyond a certain distance. It is rigorously proved that the dimer configuration is an exact, superstable ground-state configuration of all the members of the family on a periodic chain. The ground state is twofold degenerate, and there exists an energy gap above the ground state. The Majumdar-Ghosh Hamiltonian with a twofold degenerate dimer ground state is just the first member of the family. The scheme of construction is generalized to two and three dimensions, and illustrated with the help of some concrete examples. The first member in two dimensions is the Shastry-Sutherland model. Many of these models have exponentially degenerate, exact dimer ground states.
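For reference (a standard fact, not quoted from the paper), the first member of the family is the Majumdar-Ghosh chain, whose nearest-neighbour coupling is twice the next-nearest-neighbour one, i.e. a linear decrease that vanishes at the third neighbour:

H_{MG} = J \sum_i \left( \mathbf{S}_i \cdot \mathbf{S}_{i+1} + \tfrac{1}{2}\, \mathbf{S}_i \cdot \mathbf{S}_{i+2} \right), \qquad J > 0,

with the two nearest-neighbour dimer coverings of the periodic chain as its exact, twofold-degenerate ground states.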
Abstract:
We analyze the performance of an SIR-based admission control strategy in cellular CDMA systems with both voice and data traffic. Most studies in the current literature that estimate CDMA system capacity with both voice and data traffic do not take signal-to-interference ratio (SIR) based admission control into account. In this paper, we present an analytical approach to evaluate the outage probability for voice traffic, and the average system throughput and the mean delay for data traffic, for a voice/data CDMA system which employs SIR-based admission control. We show that for a data-only system, an improvement of about 25% in both the Erlang capacity and the mean delay performance is achieved with SIR-based admission control as compared to code-availability-based admission control. For a mixed voice/data system with 10 Erlangs of voice traffic, the improvement in the mean delay performance for data is about 40%. Also, for a mean delay of 50 ms with 10 Erlangs of voice traffic, the data Erlang capacity improves by about 9%.
Abstract:
Solar dynamo models based on differential rotation inferred from helioseismology tend to produce rather strong magnetic activity at high solar latitudes, in contrast to the observed fact that sunspots appear at low latitudes. We show that a meridional circulation penetrating below the tachocline can solve this problem.
Abstract:
Recently it has been shown that the fidelity of the ground state of a quantum many-body system can be used to detect its quantum critical points (QCPs). If $g$ denotes the parameter in the Hamiltonian with respect to which the fidelity is computed, we find that for one-dimensional models with large but finite size, the fidelity susceptibility $\chi_F$ can detect a QCP provided that the correlation length exponent satisfies $\nu < 2$. We then show that $\chi_F$ can be used to locate a QCP even if $\nu \ge 2$ if we introduce boundary conditions labeled by a twist angle $N\theta$, where $N$ is the system size. If the QCP lies at $g = 0$, we find that if $N$ is kept constant, $\chi_F$ has the scaling form $\chi_F \sim \theta^{-2/\nu} f(g/\theta^{1/\nu})$ for $\theta \ll 2\pi/N$. We illustrate this both in a tight-binding model of fermions with a spatially varying chemical potential with amplitude $h$ and period $2q$, in which $\nu = q$, and in an XY spin-1/2 chain, in which $\nu = 2$. Finally we show that when $q$ is very large, the model has two additional QCPs at $h = \pm 2$ which cannot be detected by studying the energy spectrum but are clearly detected by $\chi_F$. The peak value and width of $\chi_F$ seem to scale as nontrivial powers of $q$ at these QCPs. We argue that these QCPs mark a transition between extended and localized states at the Fermi energy. DOI: 10.1103/PhysRevB.86.245424