2 results for Gibbs Sampling
in Aston University Research Archive
Abstract:
We examine financial constraints and the forms of finance used for investment by analysing survey data on 157 large privatised companies in Hungary and Poland for the period 1998-2000. A Bayesian analysis using Gibbs sampling is carried out to obtain inferences about the sample companies' access to finance from a model for categorical outcomes. Applying alternative measures of financial constraints, we find that foreign companies, companies that are part of domestic industrial groups and enterprises with concentrated ownership are all less constrained in their access to finance. Moreover, we identify alternative modes of finance, since different corporate control and past performance characteristics influence the sample firms' choice of finance source. In particular, while industry-specific, access to domestic credit is positively associated with company size and past profitability. Industrial group members tend to favour bond issues as well as sell-offs of assets as appropriate types of finance for their investment programmes. Preferences for raising finance in the form of equity are associated with share concentration in a non-monotonic way, being most prevalent in those companies where the dominant owner holds 25%-49% of shares. Close links with a leading bank not only increase the possibility of bond issues but also appear to facilitate access to non-banking sources of funds, in particular to finance supplied by industrial partners. Finally, reliance on state finance is less likely for companies whose profiles resemble the case of unconstrained finance, namely companies with foreign partners, companies that are part of domestic industrial groups and companies with a strategic investor. The model also implies that the use of state funds is less likely for Polish than for Hungarian companies.
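The abstract does not spell out the categorical-outcome model, so the following is only an illustrative sketch of the kind of Gibbs sampler involved: a Bayesian binary probit model estimated via Albert-Chib data augmentation. The function name probit_gibbs and the inputs y (e.g. constrained vs unconstrained access to finance), X (firm characteristics) and tau2 are hypothetical and not taken from the paper.

```python
import numpy as np
from scipy.stats import truncnorm

def probit_gibbs(y, X, n_iter=2000, tau2=100.0, seed=0):
    """Illustrative Gibbs sampler for a Bayesian probit model (Albert-Chib augmentation).

    y    : (n,) array of 0/1 outcomes (hypothetical: constrained vs unconstrained)
    X    : (n, p) design matrix of firm characteristics
    tau2 : prior variance of the N(0, tau2 * I) prior on the coefficients
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    beta = np.zeros(p)
    # Posterior covariance of beta given the latent utilities (same every iteration)
    V = np.linalg.inv(X.T @ X + np.eye(p) / tau2)
    L = np.linalg.cholesky(V)
    draws = np.empty((n_iter, p))
    for it in range(n_iter):
        # 1. Sample latent utilities z_i ~ N(x_i' beta, 1), truncated by the observed outcome
        mu = X @ beta
        lower = np.where(y == 1, -mu, -np.inf)   # z_i > 0 when y_i = 1
        upper = np.where(y == 1, np.inf, -mu)    # z_i < 0 when y_i = 0
        z = mu + truncnorm.rvs(lower, upper, size=n, random_state=rng)
        # 2. Sample beta | z from its multivariate normal full conditional
        beta = V @ (X.T @ z) + L @ rng.standard_normal(p)
        draws[it] = beta
    return draws
```

An ordered or multinomial outcome, as the abstract's "categorical outcome" may well be, follows the same augmentation pattern with additional cut-points or utility vectors.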
Abstract:
The Dirichlet process mixture model (DPMM) is a ubiquitous, flexible Bayesian nonparametric statistical model. However, full probabilistic inference in this model is analytically intractable, so computationally intensive techniques such as Gibbs sampling are required. As a result, DPMM-based methods, which have considerable potential, are restricted to applications in which computational resources and time for inference are plentiful. For example, they would not be practical for digital signal processing on embedded hardware, where computational resources are at a serious premium. Here, we develop a simplified yet statistically rigorous approximate maximum a posteriori (MAP) inference algorithm for DPMMs. This algorithm is as simple as DP-means clustering and solves the MAP problem as well as Gibbs sampling does, while requiring only a fraction of the computational effort. (Freely available code implementing the MAP-DP algorithm for Gaussian mixtures can be found at http://www.maxlittle.net/.) Unlike related small variance asymptotics (SVA), our method is non-degenerate and so inherits the “rich get richer” property of the Dirichlet process. It also retains a non-degenerate closed-form likelihood, which enables out-of-sample calculations and the use of standard tools such as cross-validation. We illustrate the benefits of our algorithm on a range of examples and contrast it with variational, SVA and sampling approaches, both in terms of computational complexity and in terms of clustering performance. We demonstrate the wide applicability of our approach by presenting an approximate MAP inference method for the infinite hidden Markov model, whose performance compares favorably with a recently proposed hybrid SVA approach. Similarly, we show how our algorithm can be applied to a semiparametric mixed-effects regression model in which the random-effects distribution is modelled using an infinite mixture model, as used in longitudinal progression modelling in population health science. Finally, we propose directions for future research on approximate MAP inference in Bayesian nonparametrics.
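To give a flavour of the hard-assignment inference described above (this is not the authors' MAP-DP implementation, which is available at the URL in the abstract), the sketch below assigns each point to the cluster minimising a negative log-posterior cost under an assumed spherical Gaussian likelihood with fixed variance sigma2. The -log N_k term, together with the Dirichlet process concentration parameter alpha, is what keeps the "rich get richer" behaviour that degenerate SVA methods lose. The function name mapdp_like and all parameter choices are illustrative.

```python
import numpy as np

def mapdp_like(X, alpha=1.0, sigma2=1.0, max_iter=100):
    """Simplified MAP-DP-style hard clustering (illustrative sketch, not the published algorithm).

    Point x_i is assigned to the existing cluster k minimising
        -log N_k + ||x_i - mu_k||^2 / (2 * sigma2),
    with N_k and mu_k computed excluding point i, or to a fresh cluster with cost
        -log alpha
    (the data term vanishes because the new cluster is centred at x_i).
    A spherical Gaussian likelihood with known variance sigma2 is assumed for brevity.
    """
    n, _ = X.shape
    z = np.zeros(n, dtype=int)                    # start with every point in one cluster
    for _ in range(max_iter):
        changed = False
        for i in range(n):
            labels = np.delete(z, i)              # assignments excluding point i
            pts = np.delete(X, i, axis=0)
            ks = np.unique(labels)
            costs = []
            for k in ks:
                members = pts[labels == k]
                mu_k = members.mean(axis=0)
                costs.append(-np.log(len(members))
                             + np.sum((X[i] - mu_k) ** 2) / (2.0 * sigma2))
            costs.append(-np.log(alpha))          # cost of opening a new cluster
            best = int(np.argmin(costs))
            if best < len(ks):
                new_label = ks[best]
            elif z[i] in ks:
                new_label = z.max() + 1           # leave current cluster, open a new one
            else:
                new_label = z[i]                  # already a singleton: keep it
            if new_label != z[i]:
                z[i] = new_label
                changed = True
        _, z = np.unique(z, return_inverse=True)  # relabel clusters contiguously
        if not changed:
            break
    return z
```

The published MAP-DP differs from this sketch in using full conjugate posterior-predictive densities rather than plug-in Gaussian terms, which is what gives it the non-degenerate closed-form likelihood emphasised in the abstract.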