898 results for Exponential Random Graph Model
Abstract:
This thesis presents Bayesian solutions to inference problems for three types of social network data structures: a single observation of a social network, repeated observations on the same social network, and repeated observations on a social network developing through time. A social network is conceived as a structure consisting of actors and their social interactions with one another. A common conceptualisation of social networks is to let the actors be represented by nodes in a graph, with edges between pairs of nodes that are relationally tied to each other according to some definition. Statistical analysis of social networks is to a large extent concerned with modelling these relational ties, which lends itself to empirical evaluation. The first paper deals with a family of statistical models for social networks called exponential random graphs, which take various structural features of the network into account. In general, the likelihood functions of exponential random graphs are known only up to a constant of proportionality. A procedure for performing Bayesian inference using Markov chain Monte Carlo (MCMC) methods is presented. The algorithm consists of two basic steps: one in which an ordinary Metropolis-Hastings updating step is used, and another in which an importance sampling scheme is used to calculate the acceptance probability of the Metropolis-Hastings step. In the second paper, a method for modelling reports given by actors (or other informants) on their social interaction with others is investigated in a Bayesian framework. The model contains two basic ingredients: the unknown network structure and functions that link this unknown network structure to the reports given by the actors. These functions take the form of probit link functions. An intrinsic problem is that the model is not identified, meaning that there are combinations of values of the unknown structure and the parameters in the probit link functions that are observationally equivalent. Instead of imposing restrictions to achieve identification, it is proposed that the different observationally equivalent combinations of parameters and unknown structure be investigated a posteriori. Estimation of parameters is carried out using Gibbs sampling with a switching device that enables transitions between posterior modal regions. The main goal of the procedures is to provide tools for comparing different model specifications. Papers 3 and 4 propose Bayesian methods for longitudinal social networks. The premise of the models investigated is that overall change in social networks occurs as a consequence of sequences of incremental changes. Models for the evolution of social networks using continuous-time Markov chains are meant to capture these dynamics. Paper 3 presents an MCMC algorithm for exploring the posteriors of parameters for such Markov chains. More specifically, the unobserved evolution of the network between observations is explicitly modelled, thereby avoiding the need to deal with explicit formulas for the transition probabilities. This enables likelihood-based parameter inference in a wider class of network evolution models than was previously available. Paper 4 builds on the proposed inference procedure of Paper 3 and demonstrates how to perform model selection for a class of network evolution models.
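The two-step algorithm can be illustrated with a minimal sketch. The snippet below is not the thesis's exact sampler, only a toy version of the idea: a random-walk Metropolis-Hastings update on the ERGM parameters, in which the intractable ratio of normalizing constants in the acceptance probability is replaced by an importance-sampling estimate built from graphs simulated at the current parameter value. The edge/triangle statistics, the flat prior and all tuning constants are illustrative assumptions, and the costly statistic evaluation restricts it to very small networks.

```python
import numpy as np

rng = np.random.default_rng(0)

def stats(adj):
    # sufficient statistics: edge count and triangle count (toy choice)
    return np.array([adj.sum() / 2, np.trace(adj @ adj @ adj) / 6])

def sample_graph(theta, n, sweeps=30):
    # Gibbs sampler for the ERGM: toggle each dyad given the rest
    adj = np.zeros((n, n), dtype=int)
    for _ in range(sweeps):
        for i in range(n):
            for j in range(i + 1, n):
                adj[i, j] = adj[j, i] = 1
                s1 = stats(adj)
                adj[i, j] = adj[j, i] = 0
                s0 = stats(adj)
                p = 1.0 / (1.0 + np.exp(-theta @ (s1 - s0)))
                adj[i, j] = adj[j, i] = int(rng.random() < p)
    return adj

def log_z_ratio(theta_new, theta, n, m=10):
    # importance-sampling estimate of log[z(theta_new) / z(theta)]
    # using graphs drawn from the model at theta
    w = np.array([(theta_new - theta) @ stats(sample_graph(theta, n))
                  for _ in range(m)])
    return w.max() + np.log(np.mean(np.exp(w - w.max())))

def posterior_draws(adj_obs, iters=100, step=0.05):
    # random-walk Metropolis-Hastings on theta with a flat prior;
    # the intractable constant enters only through log_z_ratio
    s_obs, n = stats(adj_obs), adj_obs.shape[0]
    theta, draws = np.zeros(2), []
    for _ in range(iters):
        prop = theta + rng.normal(0.0, step, size=2)
        log_acc = (prop - theta) @ s_obs - log_z_ratio(prop, theta, n)
        if np.log(rng.random()) < log_acc:
            theta = prop
        draws.append(theta.copy())
    return np.array(draws)
```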
Abstract:
Motivation: The clustering of gene profiles across some experimental conditions of interest contributes significantly to the elucidation of unknown gene function, the validation of gene discoveries and the interpretation of biological processes. However, this clustering problem is not straightforward, as the profiles of the genes are not all independently distributed and the expression levels may have been obtained from an experimental design involving replicated arrays. Ignoring the dependence between the gene profiles and the structure of the replicated data can result in important sources of variability in the experiments being overlooked in the analysis, with the consequent possibility of misleading inferences being made. We propose a random-effects model that provides a unified approach to the clustering of genes with correlated expression levels measured in a wide variety of experimental situations. Our model is an extension of the normal mixture model to account for the correlations between the gene profiles and to enable covariate information to be incorporated into the clustering process. Hence the model is applicable to longitudinal studies with or without replication, for example to time-course experiments by using time as a covariate, and to cross-sectional experiments by using categorical covariates to represent the different experimental classes. Results: We show that our random-effects model can be fitted by maximum likelihood via the EM algorithm, for which the E (expectation) and M (maximization) steps can be implemented in closed form. Hence our model can be fitted deterministically, without the need for time-consuming Monte Carlo approximations. The effectiveness of our model-based procedure for the clustering of correlated gene profiles is demonstrated on three real datasets representing typical microarray experimental designs, covering time-course, repeated-measurement and cross-sectional data. In these examples, relevant clusters of the genes are obtained, which are supported by existing gene-function annotation. A synthetic dataset is also considered.
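For context, here is a minimal sketch of the closed-form EM iteration for a plain univariate normal mixture, the skeleton that the paper's random-effects model extends to correlated profiles and covariates; the initialization scheme and all names are illustrative, not the paper's algorithm.

```python
import numpy as np

def em_gaussian_mixture(x, k, iters=100):
    # EM for a k-component univariate normal mixture;
    # both the E-step and the M-step are in closed form
    rng = np.random.default_rng(1)
    n = len(x)
    pi = np.full(k, 1.0 / k)
    mu = rng.choice(x, k, replace=False)
    var = np.full(k, x.var())
    for _ in range(iters):
        # E-step: posterior probabilities of component membership
        dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        tau = pi * dens
        tau /= tau.sum(axis=1, keepdims=True)
        # M-step: weighted closed-form updates
        nk = tau.sum(axis=0)
        pi = nk / n
        mu = (tau * x[:, None]).sum(axis=0) / nk
        var = (tau * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return pi, mu, var
```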
Abstract:
This paper considers a stochastic SIR (susceptible-infective-removed) epidemic model in which individuals may make infectious contacts in two ways: both within 'households' (which, for ease of exposition, are assumed to be of equal size) and along the edges of a random graph describing additional social contacts. Heuristically motivated branching process approximations are described, which lead to a threshold parameter for the model and to methods for calculating the probability of a major outbreak, given few initial infectives, and the expected proportion of the population ultimately infected by such an outbreak. These approximate results are shown to be exact in the limit as the number of households tends to infinity, by proving associated limit theorems. Moreover, simulation studies indicate that the asymptotic results provide good approximations for modestly sized finite populations. The extension to households of unequal size is discussed briefly.
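A hedged simulation sketch of this two-level-mixing structure: equal-sized households with complete internal mixing, plus an Erdos-Renyi graph of global contacts. Fixed per-contact transmission probabilities (a Reed-Frost-type simplification, not the paper's general SIR with random infectious periods) make the final size computable by a simple percolation-style traversal; all parameter names are illustrative.

```python
import random

def final_size(n_households, h, p_local, p_global, mean_degree, seed=0):
    # individuals 0..n-1; individual i belongs to household i // h
    rng = random.Random(seed)
    n = n_households * h
    p_edge = mean_degree / (n - 1)         # Erdos-Renyi global contact graph
    nbrs = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p_edge:
                nbrs[i].append(j)
                nbrs[j].append(i)
    infected, frontier = {0}, [0]          # one initial infective
    while frontier:
        new = []
        for i in frontier:
            base = (i // h) * h            # this individual's household block
            household = [j for j in range(base, base + h) if j != i]
            for j, p in ([(j, p_local) for j in household] +
                         [(j, p_global) for j in nbrs[i]]):
                if j not in infected and rng.random() < p:
                    infected.add(j)
                    new.append(j)
        frontier = new
    return len(infected) / n               # proportion ultimately infected

# e.g. averaging final_size(500, 3, 0.6, 0.1, 4.0, seed=s) over many seeds
# separates minor outbreaks from major ones above the threshold
```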
Abstract:
Consider a random medium consisting of N points randomly distributed so that there is no correlation among the distances separating them. This is the random link model, which is the high-dimensionality limit (mean-field approximation) of the Euclidean random point structure. In the random link model, at discrete time steps, a walker moves to the nearest point that has not been visited in the last mu steps (the memory), producing a deterministic partially self-avoiding walk (the tourist walk). We have analytically obtained the distribution of the number n of points explored by the walker with memory mu=2, as well as the joint distribution of the transient time and period. This result enables us to explain the abrupt change in exploratory behavior between the cases mu=1 (memoryless walker, driven by extreme-value statistics) and mu=2 (walker with memory, driven by combinatorial statistics). In the mu=1 case, the mean number of newly visited points in the thermodynamic limit (N >> 1) is simply <n> = e = 2.72..., while in the mu=2 case the mean number <n> of visited points grows proportionally to N^(1/2). This result also allows us to establish an equivalence between the random link model with mu=2 and the random map (uncorrelated back-and-forth distances) with mu=0, and to explain the abrupt change between the probabilities for a null transient time and for subsequent ones.
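A minimal sketch of the tourist walk on the random link model: i.i.d. symmetric distances, a deterministic move to the nearest point outside the memory window, and cycle detection via repeated walker states. The state encoding and all names are illustrative assumptions.

```python
import numpy as np

def tourist_walk(n, mu, seed=0):
    # random link model: symmetric i.i.d. distances, no correlations
    rng = np.random.default_rng(seed)
    d = np.triu(rng.random((n, n)), 1)
    d = d + d.T
    np.fill_diagonal(d, np.inf)
    path, seen, step = [0], {}, 0
    while True:
        # the walk is deterministic given the last mu visited points,
        # so a repeated state means the walk has entered its cycle
        state = tuple(path[-mu:]) if mu > 0 else (path[-1],)
        if state in seen:
            t = seen[state]                 # transient time
            return t, step - t              # (transient, period)
        seen[state] = step
        order = np.argsort(d[path[-1]])
        nxt = next(int(j) for j in order if j not in state)
        path.append(nxt)
        step += 1

# e.g. averaging transient + period over many runs of tourist_walk(1000, 2)
# shows the number of explored points growing roughly like N**0.5,
# versus a mean near e for mu=1
```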
Abstract:
We give reasons why demographic parameters such as survival and reproduction rates are often modelled well in stochastic population simulation using beta distributions. In practice, these parameters are frequently expected to be correlated, for example with survival rates for all age classes tending to be high or low in the same year. We therefore discuss a method for producing correlated beta random variables by transforming correlated normal random variables, and show how it can be applied in practice by means of a simple example. We also note how the same approach can be used to produce correlated uniform, triangular, and exponential random variables.
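The transformation described is, in essence, a Gaussian-copula (NORTA-style) construction. A minimal sketch, assuming scipy is available; note that the correlation is specified on the underlying normals, so the resulting betas' correlation is close to, but not exactly, that target, and all parameter names are illustrative.

```python
import numpy as np
from scipy.stats import beta, norm

def correlated_betas(n, a, b, corr, seed=0):
    # draw correlated normals, map through the normal CDF to
    # correlated uniforms, then through beta inverse CDFs
    rng = np.random.default_rng(seed)
    k = len(a)
    cov = np.full((k, k), corr)
    np.fill_diagonal(cov, 1.0)
    z = rng.multivariate_normal(np.zeros(k), cov, size=n)
    u = norm.cdf(z)
    return np.column_stack([beta.ppf(u[:, i], a[i], b[i]) for i in range(k)])

# survival rates for three age classes that tend to rise and fall together:
# s = correlated_betas(1000, a=[18, 12, 8], b=[2, 3, 4], corr=0.8)
```

Replacing `beta.ppf` with the inverse CDF of a uniform, triangular, or exponential distribution yields the other correlated variables mentioned above.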
Abstract:
In many occupational safety interventions, the objective is to reduce the injury incidence as well as the mean claims cost once injury has occurred. The claims cost data within a period typically contain a large proportion of zero observations (no claim). The distribution thus comprises a point mass at 0 mixed with a non-degenerate parametric component. Essentially, the likelihood function can be factorized into two orthogonal components. These two components relate respectively to the effect of covariates on the incidence of claims and the magnitude of claims, given that claims are made. Furthermore, the longitudinal nature of the intervention inherently imposes some correlation among the observations. This paper introduces a zero-augmented gamma random effects model for analysing longitudinal data with many zeros. Adopting the generalized linear mixed model (GLMM) approach reduces the original problem to the fitting of two independent GLMMs. The method is applied to evaluate the effectiveness of a workplace risk assessment teams program, trialled within the cleaning services of a Western Australian public hospital.
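Because the likelihood factorizes into orthogonal components, the two parts can be fitted separately. Below is a minimal sketch with plain GLMs via statsmodels, assuming hypothetical columns `cost`, `intervention`, and `period`; the paper's actual method adds random effects (GLMMs) to capture the longitudinal correlation, which requires a mixed-model fitter and is omitted here.

```python
import statsmodels.api as sm
import statsmodels.formula.api as smf

def fit_two_part(df):
    # part 1: incidence of any claim (logistic model for zero vs non-zero)
    df = df.assign(any_claim=(df["cost"] > 0).astype(int))
    incidence = smf.glm("any_claim ~ intervention + period", data=df,
                        family=sm.families.Binomial()).fit()
    # part 2: claim magnitude, given a claim is made (gamma GLM, log link)
    severity = smf.glm("cost ~ intervention + period",
                       data=df[df["cost"] > 0],
                       family=sm.families.Gamma(sm.families.links.Log())).fit()
    return incidence, severity
```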
Abstract:
Inspired by dynamical systems theory and Brewer's contributions to applying it to economics, this paper establishes a bond graph model. Two main ingredients, a set of interconnectivities based on nodes and links (bonds) and a fractional-order dynamical perspective, prove to give a good macroeconomic representation of countries' potential performance under present-day globalization. Estimations based on time series for 50 countries over the last five decades confirm the accuracy of the model and the importance of scale for economic performance.
Abstract:
This project studies different explanatory models for the behavior of CDS securities: the fixed-effects model, the GLS random-effects model, pooled OLS, and the quantile regression model. After determining the best-fitting model, trading strategies with long and short positions in CDS were developed. Owing to certain features of CDS, I conclude that quantile regression is the most efficient model for estimating the data. The P&L and Sharpe ratio of the strategy are analyzed using a backtesting approach, from which I conclude that, mainly for non-financial companies, the model allows traders to identify and profit from arbitrage opportunities.
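A minimal sketch of the preferred model class, quantile regression fitted with statsmodels; the column names and the mispricing signal are illustrative assumptions, not the project's actual specification.

```python
import statsmodels.formula.api as smf

def fit_quantile(df, q=0.5):
    # median regression of CDS spreads on hypothetical explanatory
    # variables; quantile regression is robust to the heavy tails
    # typical of CDS data
    return smf.quantreg("cds_spread ~ equity_vol + leverage + riskfree",
                        df).fit(q=q)

# res = fit_quantile(panel)
# signal = panel["cds_spread"] - res.predict(panel)  # mispricing to trade
```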
Abstract:
Counting labelled planar graphs, and typical properties of random labelled planar graphs, have received much attention recently. We start the process here of extending these investigations to graphs embeddable on any fixed surface S. In particular we show that the labelled graphs embeddable on S have the same growth constant as for planar graphs, and the same holds for unlabelled graphs. Also, if we pick a graph uniformly at random from the graphs embeddable on S which have vertex set {1, . . . , n}, then with probability tending to 1 as n → ∞, this random graph either is connected or consists of one giant component together with a few nodes in small planar components.
Abstract:
This paper presents 3-D brain tissue classification schemes using three recent, promising energy minimization methods for Markov random fields: graph cuts, loopy belief propagation and tree-reweighted message passing. The classification is performed using the well-known finite Gaussian mixture Markov random field model. Results from the above methods are compared with the widely used iterated conditional modes algorithm. The evaluation is performed on a dataset containing simulated T1-weighted MR brain volumes with varying noise and intensity non-uniformities. The comparisons are performed in terms of energies as well as against ground truth segmentations, using various quantitative metrics.
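For reference, a minimal 2-D sketch of the iterated conditional modes baseline on a Gaussian-mixture Potts MRF (the paper works in 3-D; the unary term, the Potts smoothness weight beta, and the 4-neighbourhood are illustrative assumptions):

```python
import numpy as np

def icm_segment(img, means, variances, beta=1.5, iters=10):
    # each pixel's label greedily minimizes its negative Gaussian
    # log-likelihood plus beta per disagreeing 4-neighbour
    k = len(means)
    h, w = img.shape
    unary = np.stack([0.5 * np.log(2 * np.pi * variances[c])
                      + 0.5 * (img - means[c]) ** 2 / variances[c]
                      for c in range(k)], axis=-1)
    labels = unary.argmin(axis=-1)          # maximum-likelihood start
    for _ in range(iters):
        for y in range(h):
            for x in range(w):
                best, best_e = labels[y, x], np.inf
                for c in range(k):
                    e = unary[y, x, c]
                    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and labels[ny, nx] != c:
                            e += beta
                    if e < best_e:
                        best, best_e = c, e
                labels[y, x] = best
    return labels
```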
Abstract:
Most methods for small-area estimation are based on composite estimators derived from design- or model-based methods. A composite estimator is a linear combination of a direct and an indirect estimator, with weights that usually depend on unknown parameters which need to be estimated. Although model-based small-area estimators are usually based on random-effects models, the assumption of fixed effects is at face value more appropriate. Model-based estimators are justified by the assumption of random (interchangeable) area effects; in practice, however, areas are not interchangeable. In the present paper we empirically assess the quality of several small-area estimators in the setting in which the area effects are treated as fixed. We consider two settings: one that draws samples from a theoretical population, and another that draws samples from an empirical population of a labor force register maintained by the National Institute of Social Security (NISS) of Catalonia. We distinguish two types of composite estimators: (a) those whose weights involve area-specific estimates of bias and variance; and (b) those whose weights involve a common variance and a common squared-bias estimate for all areas. We assess their precision and discuss alternatives for optimizing composite estimation in applications.
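A minimal sketch of the composite estimator itself, using the standard MSE-minimizing weight under the simplifying assumptions that the direct estimator is unbiased and the two estimators are uncorrelated (function and argument names are illustrative):

```python
import numpy as np

def composite(direct, indirect, var_direct, var_indirect, bias2_indirect):
    # MSE-minimizing weight on the direct estimator:
    # w = MSE(indirect) / (MSE(direct) + MSE(indirect))
    mse_indirect = var_indirect + bias2_indirect
    w = mse_indirect / (var_direct + mse_indirect)
    return w * direct + (1.0 - w) * indirect

# type (a) passes per-area variance and squared-bias estimates;
# type (b) passes one pooled value for all areas, e.g.:
est = composite(np.array([10.2, 8.7]), np.array([9.5, 9.5]),
                np.array([1.4, 2.0]), np.array([0.3, 0.3]),
                np.array([0.5, 0.5]))
```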
Abstract:
We consider an infinite number of noninteracting lattice random walkers, with the goal of determining statistical properties of the time, out of a total time T, that a single site has been occupied by n random walkers. Initially the random walkers are assumed to be uniformly distributed on the lattice, except for the target site at the origin, which is unoccupied. The random-walk model is taken to be a continuous-time random walk, and the pausing-time density at the target site is allowed to differ from the pausing-time density at other sites. We calculate the dependence of the mean time of occupancy by n random walkers as a function of n and the observation time T. We also find the variance of the cumulative time during which the site is unoccupied. The large-T behavior of the variance differs according to whether the random walk is transient or recurrent. It is shown that at large T the variance is proportional to T in three or more dimensions, to T^(3/2) in one dimension, and to T ln T in two dimensions.
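A hedged simulation sketch of the occupancy statistic: a discrete-time stand-in for the continuous-time random walk (so the site-dependent pausing-time densities are not modelled), with walkers started uniformly on a ring except at the empty origin; all parameters are illustrative.

```python
import numpy as np

def occupancy_times(n_walkers, lattice, T, seed=0):
    # returns counts[n] = total time (out of T steps) during which the
    # origin was occupied by exactly n walkers; counts.get(0, 0) is the
    # cumulative unoccupied time whose variance the paper studies
    rng = np.random.default_rng(seed)
    pos = rng.integers(1, lattice, size=n_walkers)  # origin starts empty
    counts = {}
    for _ in range(T):
        pos = (pos + rng.choice((-1, 1), size=n_walkers)) % lattice
        n = int(np.sum(pos == 0))
        counts[n] = counts.get(n, 0) + 1
    return counts
```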
Abstract:
Extreme-times techniques, generally applied to nonequilibrium statistical mechanics, are also useful for a better understanding of financial markets. We present a detailed study of the mean first-passage time for the volatility of return time series. The empirical results extracted from daily data of major indices seem to follow the same law regardless of the index, suggesting a universal pattern. The empirical mean first-passage time to a certain level L differs markedly from that of the Wiener process, showing dissimilar behavior depending on whether L is higher or lower than the average volatility. All of this indicates a more complex dynamics, in which a reverting force drives volatility toward its mean value. We therefore present the mean first-passage time expressions for the most common stochastic volatility models, whose approach is comparable to the random diffusion description. We discuss asymptotic approximations of these models and compare them with the empirical results, finding good agreement with the exponential Ornstein-Uhlenbeck model.
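A minimal Monte Carlo sketch of the quantity studied: the mean first-passage time of exponential Ornstein-Uhlenbeck volatility to a level L. The Euler discretization and all parameter names are illustrative assumptions, not the paper's closed-form expressions.

```python
import numpy as np

def mfpt_expou(level, m, alpha, k, v0, dt=1e-3, n_paths=500, seed=0):
    # volatility sigma = m * exp(Y) with OU driver dY = -alpha*Y dt + k dW;
    # estimate the mean time for sigma to first reach `level` from v0
    rng = np.random.default_rng(seed)
    times = np.zeros(n_paths)
    for p in range(n_paths):
        y, t = np.log(v0 / m), 0.0
        up = v0 < level                    # direction of the crossing
        while (m * np.exp(y) < level) == up:
            y += -alpha * y * dt + k * np.sqrt(dt) * rng.standard_normal()
            t += dt
        times[p] = t
    return times.mean()

# e.g. mfpt_expou(level=0.3, m=0.2, alpha=1.0, k=0.5, v0=0.2)
```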
Abstract:
We present MBIS (Multivariate Bayesian Image Segmentation tool), a clustering tool based on the mixture of multivariate normal distributions model. MBIS supports multichannel bias field correction based on a B-spline model. A second methodological novelty is the inclusion of graph-cuts optimization for the stationary anisotropic hidden Markov random field model. Along with MBIS, we release an evaluation framework that contains three different experiments on multi-site data. We first validate the accuracy of segmentation and the estimated bias field for each channel. MBIS outperforms a widely used segmentation tool in a cross-comparison evaluation. The second experiment demonstrates the robustness of results on atlas-free segmentation of two image sets from scan-rescan protocols on 21 healthy subjects. Multivariate segmentation is more replicable than the monospectral counterpart on T1-weighted images. Finally, we provide a third experiment to illustrate how MBIS can be used in a large-scale study of tissue volume change with increasing age in 584 healthy subjects. This last result is meaningful as multivariate segmentation performs robustly without the need for prior knowledge.
Abstract:
The objective of this work was to compare random regression models for the estimation of genetic parameters for Guzerat milk production, using orthogonal Legendre polynomials. Records (20,524) of test-day milk yield (TDMY) from 2,816 first-lactation Guzerat cows were used. TDMY records, grouped into 10 monthly classes, were analyzed with additive genetic, permanent environmental, and residual effects as random effects, and with contemporary group, calving age (linear and quadratic effects) and the mean lactation curve as fixed effects. Trajectories for the additive genetic and permanent environmental effects were modeled by means of a covariance function employing orthogonal Legendre polynomials of orders two through five. Residual variances were grouped into one, four, six, or ten variance classes; the best model had six residual variance classes. Heritability estimates for the TDMY records varied from 0.19 to 0.32. By the main criteria employed, the random regression model with a second-order Legendre polynomial for the additive genetic effect and a fifth-order polynomial for the permanent environmental effect was adequate. The model with a second-order Legendre polynomial for the additive genetic effect and a fourth-order polynomial for the permanent environmental effect could also be employed in these analyses.
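A minimal sketch of the Legendre covariates underlying such random regression test-day models: time points are rescaled to [-1, 1] and normalized Legendre polynomials are evaluated to build the design matrix for the random regressions. The normalization and the monthly-class example are standard choices assumed here, not taken from the paper.

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_basis(t, order):
    # rescale time to [-1, 1], then evaluate P_0 ... P_order
    tmin, tmax = t.min(), t.max()
    x = 2 * (t - tmin) / (tmax - tmin) - 1
    phi = np.column_stack([legendre.legval(x, [0] * j + [1])
                           for j in range(order + 1)])
    # common normalization sqrt((2j + 1) / 2)
    return phi * np.sqrt((2 * np.arange(order + 1) + 1) / 2)

# e.g. monthly classes 1..10 with a second-order polynomial for the
# additive genetic effect: Z = legendre_basis(np.arange(1, 11), 2)
```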