960 results for Probability Metrics
Abstract:
We investigate a conjecture on the cover times of planar graphs by means of large Monte Carlo simulations. The conjecture states that the cover time tau(G_N) of a planar graph G_N of N vertices and maximal degree d is lower bounded by tau(G_N) >= C(d) N (ln N)^2 with C(d) = (d/(4 pi)) tan(pi/d), with equality holding for some geometries. We tested this conjecture on the regular honeycomb (d = 3), regular square (d = 4), regular elongated triangular (d = 5), and regular triangular (d = 6) lattices, as well as on the nonregular Union Jack lattice (d_min = 4, d_max = 8). Indeed, the Monte Carlo data suggest that the rigorous lower bound may hold as an equality for most of these lattices, with an interesting issue in the case of the Union Jack lattice. The data for the honeycomb lattice, however, violate the bound with the conjectured constant. The empirical probability distribution function of the cover time for the square lattice is also briefly presented, since very little is known about cover time probability distribution functions in general.
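For orientation, a minimal sketch (Python, standard library only) of how the conjectured constant C(d) and the bound C(d) N (ln N)^2 can be evaluated, together with a naive Monte Carlo estimate of the cover time on a small periodic square lattice (d = 4); the lattice size, number of walks, and boundary conditions are illustrative choices, not those used in the paper.

```python
# Sketch: evaluate the conjectured constant C(d) = (d / 4*pi) * tan(pi / d)
# and the lower bound C(d) * N * (ln N)^2, plus a naive Monte Carlo estimate
# of the cover time on a small periodic square lattice (d = 4).
# Lattice size and number of walks below are illustrative, not the paper's.
import math
import random

def C(d):
    """Conjectured constant for maximal degree d."""
    return (d / (4 * math.pi)) * math.tan(math.pi / d)

def bound(d, N):
    """Conjectured lower bound on the cover time."""
    return C(d) * N * math.log(N) ** 2

for d in (3, 4, 5, 6):
    print(f"d = {d}:  C(d) = {C(d):.4f},  bound at N = 10^4: {bound(d, 10**4):.3e}")

def cover_time_square(L, rng):
    """One random-walk cover time on an L x L torus (degree-4 square lattice)."""
    N = L * L
    visited = [[False] * L for _ in range(L)]
    x = y = 0
    visited[0][0] = True
    covered, steps = 1, 0
    while covered < N:
        dx, dy = rng.choice(((1, 0), (-1, 0), (0, 1), (0, -1)))
        x, y = (x + dx) % L, (y + dy) % L
        steps += 1
        if not visited[x][y]:
            visited[x][y] = True
            covered += 1
    return steps

rng = random.Random(0)
L = 32                       # small lattice, for illustration only
samples = [cover_time_square(L, rng) for _ in range(20)]
N = L * L
print("mean cover time:", sum(samples) / len(samples))
print("conjectured bound C(4) N (ln N)^2:", bound(4, N))
```

For the square lattice, C(4) = 1/pi, so the conjectured bound coincides with the known asymptotics (1/pi) N (ln N)^2 of the cover time of the two-dimensional torus, which is what the small-size comparison above probes.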
Abstract:
In this work, we study the emission of tensor-type gravitational degrees of freedom from a higher-dimensional, simply rotating black hole in the bulk. The decoupled radial part of the corresponding field equation is first solved analytically in the limit of low energy of the emitted particles and low angular momentum of the black hole in order to derive the absorption probability. Both the angular and radial equations are then solved numerically, and the comparison of the analytical and numerical results shows very good agreement in the low- and intermediate-energy regimes. By using our exact numerical results, we compute the energy and angular-momentum emission rates and their dependence on the spacetime parameters, such as the number of additional spacelike dimensions and the angular momentum of the black hole. Particular care is given to the convergence of our results in terms of the number of modes taken into account in the calculation and the multiplicity of graviton tensor modes that correspond to the same angular-momentum numbers.
Abstract:
The optimal discrimination of nonorthogonal quantum states with minimum error probability is a fundamental task in quantum measurement theory as well as an important primitive in optical communication. In this work, we propose and experimentally realize a new and simple quantum measurement strategy capable of discriminating two coherent states with smaller error probabilities than can be obtained using the standard measurement devices: the Kennedy receiver and the homodyne receiver.
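For reference, the textbook benchmark error probabilities for discriminating the coherent states |alpha> and |-alpha> with equal priors, namely the Helstrom (minimum-error) bound, the ideal Kennedy receiver, and the ideal homodyne receiver, can be compared with a few lines of Python; these are standard expressions under the usual conventions, not results taken from the paper, and exact prefactors depend on the conventions adopted.

```python
# Sketch: standard benchmark error probabilities for discriminating |alpha> vs |-alpha>
# (equal priors, real alpha). Textbook expressions, not taken from the paper.
import math

def p_helstrom(alpha):
    # Helstrom (minimum-error) bound: (1 - sqrt(1 - |<alpha|-alpha>|^2)) / 2,
    # with |<alpha|-alpha>|^2 = exp(-4 alpha^2).
    return 0.5 * (1.0 - math.sqrt(1.0 - math.exp(-4.0 * alpha**2)))

def p_kennedy(alpha):
    # Ideal Kennedy receiver: displace by +alpha, then on/off photon detection;
    # an error occurs only when |2 alpha> yields no click.
    return 0.5 * math.exp(-4.0 * alpha**2)

def p_homodyne(alpha):
    # Ideal homodyne receiver: Gaussian discrimination of the signal quadrature.
    return 0.5 * math.erfc(math.sqrt(2.0) * alpha)

for alpha in (0.2, 0.5, 1.0):
    print(f"alpha = {alpha}:  Helstrom = {p_helstrom(alpha):.3e}, "
          f"Kennedy = {p_kennedy(alpha):.3e}, homodyne = {p_homodyne(alpha):.3e}")
```

The claim of the abstract is that the proposed receiver beats the Kennedy and homodyne curves computed here.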
Abstract:
We investigate the stability of the D-dimensional Reissner-Nordstrom-anti-de Sitter metrics as solutions of the Einstein-Maxwell equations. We show that asymptotically anti-de Sitter (AdS) black holes are dynamically stable for all values of the charge and the anti-de Sitter radius in D = 5, 6, ..., 11 dimensional space-times. This does not contradict the dynamical instability of RNAdS black holes found by Gubser in N=8 gauged supergravity, because the latter instability comes from the tachyon mode of the scalar field coupled to the system. Asymptotically AdS black holes are known to be thermodynamically unstable for some region of parameters, yet, as we show here, they are stable against gravitational perturbations.
Abstract:
In this work we investigate knowledge acquisition as performed by multiple agents interacting as they infer, in the presence of observation errors, respective models of a complex system. We focus on the specific case in which, at each time step, each agent takes into account its current observation as well as the average of the models of its neighbors. The agents are connected by an interaction network of Erdos-Renyi or Barabasi-Albert type. First, we investigate situations in which one of the agents has a different (higher or lower) probability of observation error. It is shown that the influence of this special agent on the quality of the models inferred by the rest of the network can be substantial, varying linearly with the degree of the agent with the different estimation error. When the degree of this agent is taken as its fitness parameter, the effect of the different estimation error is even more pronounced, becoming superlinear. To complement our analysis, we provide an analytical solution for the overall performance of the system. We also investigate the knowledge-acquisition dynamics when the agents are grouped into communities. We verify that the inclusion of edges between agents (within a community) having a higher probability of observation error degrades the quality of the estimates of the agents in the other communities.
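A minimal sketch (Python with NumPy and NetworkX assumed) of one plausible reading of the dynamics described above: each agent holds a scalar model of a fixed quantity, observes it with agent-specific Gaussian noise, and at each step combines its own observation with the average model of its neighbors; the combination rule, noise levels, network parameters, and the choice of a single scalar are illustrative assumptions rather than the paper's exact specification.

```python
# Sketch: agents on an Erdos-Renyi network estimating a scalar "truth",
# each combining its own noisy observation with the mean model of its neighbors.
# Update rule, noise levels and sizes are illustrative assumptions.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n_agents, truth, steps = 200, 1.0, 500
G = nx.erdos_renyi_graph(n_agents, 0.05, seed=0)

sigma = np.full(n_agents, 0.5)   # observation-error std dev per agent
sigma[0] = 2.0                   # one "special" agent with a larger error

models = rng.normal(truth, sigma)          # initial models from one observation
for _ in range(steps):
    obs = rng.normal(truth, sigma)         # fresh noisy observations
    new_models = models.copy()
    for i in range(n_agents):
        nbrs = list(G.neighbors(i))
        nbr_mean = models[nbrs].mean() if nbrs else models[i]
        # combine own observation with the average neighbor model
        new_models[i] = 0.5 * (obs[i] + nbr_mean)
    models = new_models

err = np.abs(models - truth)
print("mean |error| of all agents:", err.mean())
print("mean |error| excluding the special agent:", err[1:].mean())
```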
Abstract:
In one-component Abelian sandpile models, the toppling probabilities are independent quantities. This is not the case in multicomponent models. The condition of associativity of the underlying Abelian algebras imposes nonlinear relations among the toppling probabilities. These relations are derived for the case of two-component quadratic Abelian algebras. We show that Abelian sandpile models with two conservation laws have only trivial avalanches.
Abstract:
With each directed acyclic graph (this includes some D-dimensional lattices) one can associate Abelian algebras that we call directed Abelian algebras (DAAs), attaching a generator of the algebra to each site of the graph. These algebras depend on several parameters and are semisimple. Using any DAA, one can define a family of Hamiltonians which give the continuous-time evolution of a stochastic process. The calculation of the spectra and ground-state wave functions (stationary-state probability distributions) is an easy algebraic exercise. If one considers D-dimensional lattices and chooses Hamiltonians linear in the generators, in finite-size scaling the Hamiltonian spectrum is gapless with a critical dynamic exponent z = D. One possible application of the DAA is to sandpile models. In the paper we present this application, considering one- and two-dimensional lattices. In the one-dimensional case, when the DAA conserves the number of particles, the avalanches belong to the random-walker universality class (critical exponent sigma_tau = 3/2). We study the local density of particles inside large avalanches, showing a depletion of particles at the source of the avalanche and an enrichment at its end. In two dimensions we performed extensive Monte Carlo simulations and found sigma_tau = 1.780 +/- 0.005.
Abstract:
We investigate the performance of a variant of Axelrod's model for the dissemination of culture, the Adaptive Culture Heuristic (ACH), on solving an NP-complete optimization problem, namely, the classification of binary input patterns of size F by a Boolean binary perceptron. In this heuristic, N agents, characterized by binary strings of length F which represent possible solutions to the optimization problem, are fixed at the sites of a square lattice and interact with their nearest neighbors only. The interactions are such that the agents' strings (or cultures) become more similar to the low-cost strings of their neighbors, resulting in the dissemination of these strings across the lattice. Eventually the dynamics freezes into a homogeneous absorbing configuration in which all agents exhibit identical solutions to the optimization problem. We find through extensive simulations that the probability of finding the optimal solution is a function of the reduced variable F/N^(1/4), so that the number of agents must increase with the fourth power of the problem size, N proportional to F^4, to guarantee a fixed probability of success. In this case, we find that the relaxation time to reach an absorbing configuration scales as F^6, which can be interpreted as the overall computational cost of the ACH to find an optimal set of weights for a Boolean binary perceptron, given a fixed probability of success.
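A rough sketch of such a dynamics (Python with NumPy assumed): the cost of an agent's weight string is taken here as the number of training patterns misclassified by the corresponding binary perceptron, and the imitation move, copying one disagreeing bit from a lower-cost neighbor, is one plausible reading of the strings "becoming more similar to the low-cost strings of their neighbors"; lattice size, F, number of patterns, and the stopping rule are illustrative.

```python
# Sketch of an Adaptive-Culture-Heuristic-like dynamics for a binary perceptron.
# Agents sit on an L x L lattice (periodic), each holding a +-1 weight string of
# length F; a randomly chosen agent copies one disagreeing bit from a randomly
# chosen neighbor whose cost (number of misclassified patterns) is lower.
# The precise update rule and parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
L, F, P = 6, 20, 30                       # lattice side, string length, number of patterns
X = rng.choice((-1, 1), size=(P, F))      # binary input patterns
y = rng.choice((-1, 1), size=P)           # target labels

def cost(w):
    """Number of patterns misclassified by the binary perceptron with weights w."""
    return int(np.sum(np.sign(X @ w) != y))

W = rng.choice((-1, 1), size=(L, L, F))   # one weight string per lattice site

def neighbors(i, j):
    return [((i + 1) % L, j), ((i - 1) % L, j), (i, (j + 1) % L), (i, (j - 1) % L)]

for step in range(50_000):
    i, j = rng.integers(L), rng.integers(L)
    ni, nj = neighbors(i, j)[rng.integers(4)]
    if cost(W[ni, nj]) < cost(W[i, j]):
        diff = np.flatnonzero(W[i, j] != W[ni, nj])
        if diff.size:                     # copy one disagreeing bit from the better neighbor
            k = diff[rng.integers(diff.size)]
            W[i, j, k] = W[ni, nj, k]

best = min(cost(W[i, j]) for i in range(L) for j in range(L))
print("best cost on the lattice after the run:", best)
```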
Abstract:
Fontanari introduced a model [Phys. Rev. Lett. 91, 218101 (2003)] for studying Muller's ratchet phenomenon in growing asexual populations. Two situations were studied, either including a death probability for each newborn or not, but analytical (recursive) expressions were found only in the no-decay case. In this Brief Report a branching-process formalism is used to find recurrence equations that generalize the analytical results of the original paper, besides confirming the interesting effects revealed by the original simulations.
Abstract:
Thanks to recent advances in molecular biology, allied to an ever-increasing amount of experimental data, the functional state of thousands of genes can now be extracted simultaneously by using methods such as cDNA microarrays and RNA-Seq. Particularly important related investigations are the modeling and identification of gene regulatory networks from expression data sets. Such knowledge is fundamental for many applications, such as disease treatment, therapeutic intervention strategies and drug design, as well as for planning new high-throughput experiments. Methods have been developed for gene network modeling and identification from expression profiles. However, an important open problem is how to validate such approaches and their results. This work presents an objective approach for the validation of gene network modeling and identification which comprises the following three main aspects: (1) Artificial Gene Network (AGN) model generation through theoretical models of complex networks, which is used to simulate temporal expression data; (2) a computational method for gene network identification from the simulated data, which is founded on a feature selection approach where a target gene is fixed and the expression profile is observed for all other genes in order to identify a relevant subset of predictors; and (3) validation of the identified AGN-based network through comparison with the original network. The proposed framework allows several types of AGNs to be generated and used in order to simulate temporal expression data. The results of the network identification method can then be compared to the original network in order to estimate its properties and accuracy. Some of the most important theoretical models of complex networks have been assessed: the uniformly random Erdos-Renyi (ER), the small-world Watts-Strogatz (WS), the scale-free Barabasi-Albert (BA), and geographical networks (GG). The experimental results indicate that the inference method was sensitive to variations in the average degree k, its network recovery rate decreasing as k increases. The signal length was important for the inference method to achieve better accuracy in the network identification rate, and very good results were obtained with small expression profiles. However, the adopted inference method was not able to distinguish different structures of interaction among genes, presenting similar behavior when applied to different network topologies. In summary, the proposed framework, though simple, was adequate for the validation of the inferred networks by identifying some properties of the evaluated method, and it can be extended to other inference methods.
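Aspect (3) of the framework, the comparison of the identified network with the original AGN, can be illustrated with a small sketch (Python with NumPy and NetworkX assumed) that computes edge-level precision and recall between a ground-truth adjacency matrix and an inferred one; the "inferred" network below is mocked by perturbing the true one, since the point is only to show the comparison step, not the feature-selection inference itself.

```python
# Sketch: validating an inferred gene network against the original (ground-truth) AGN
# by edge-level precision and recall. The "inferred" network is mocked by perturbing
# the true one; in the actual framework it would come from the identification method.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n_genes = 50
true_net = nx.gnp_random_graph(n_genes, 0.08, seed=0, directed=True)   # ER-type AGN
A_true = nx.to_numpy_array(true_net, dtype=bool)

# Mock inference: keep 80% of true edges, add a few spurious ones.
A_pred = A_true & (rng.random(A_true.shape) < 0.8)
A_pred |= (~A_true) & (rng.random(A_true.shape) < 0.01)
np.fill_diagonal(A_pred, False)

tp = np.sum(A_pred & A_true)
fp = np.sum(A_pred & ~A_true)
fn = np.sum(~A_pred & A_true)
precision = tp / (tp + fp)
recall = tp / (tp + fn)            # the "network recovery rate" discussed above
print(f"edges: true = {A_true.sum()}, inferred = {A_pred.sum()}")
print(f"precision = {precision:.2f}, recall = {recall:.2f}")
```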
Abstract:
This article presents maximum likelihood estimators (MLEs) and log-likelihood ratio (LLR) tests for the eigenvalues and eigenvectors of Gaussian random symmetric matrices of arbitrary dimension, where the observations are independent repeated samples from one or two populations. These inference problems are relevant in the analysis of diffusion tensor imaging data and polarized cosmic background radiation data, where the observations are, respectively, 3 x 3 and 2 x 2 symmetric positive definite matrices. The parameter sets involved in the inference problems for eigenvalues and eigenvectors are subsets of Euclidean space that are either affine subspaces, embedded submanifolds that are invariant under orthogonal transformations, or polyhedral convex cones. We show that for a class of sets that includes the ones considered in this paper, the MLEs of the mean parameter do not depend on the covariance parameters if and only if the covariance structure is orthogonally invariant. Closed-form expressions for the MLEs and the associated LLRs are derived for this covariance structure.
Abstract:
Context tree models have been introduced by Rissanen in [25] as a parsimonious generalization of Markov models. Since then, they have been widely used in applied probability and statistics. The present paper investigates non-asymptotic properties of two popular procedures of context tree estimation: Rissanen's algorithm Context and penalized maximum likelihood. First showing how they are related, we prove finite horizon bounds for the probability of over- and under-estimation. Concerning overestimation, no boundedness or loss-of-memory conditions are required: the proof relies on new deviation inequalities for empirical probabilities of independent interest. The under-estimation properties rely on classical hypotheses for processes of infinite memory. These results improve on and generalize the bounds obtained in Duarte et al. (2006) [12], Galves et al. (2008) [18], Galves and Leonardi (2008) [17], Leonardi (2010) [22], refining asymptotic results of Buhlmann and Wyner (1999) [4] and Csiszar and Talata (2006) [9].
Abstract:
We consider the one-dimensional asymmetric simple exclusion process (ASEP) in which particles jump to the right at rate p in (1/2, 1] and to the left at rate 1 - p, interacting by exclusion. In the initial state there is a finite region such that to the left of this region all sites are occupied and to the right of it all sites are empty. Under this initial state, the hydrodynamical limit of the process converges to the rarefaction fan of the associated Burgers equation. In particular, suppose that the initial state has first-class particles to the left of the origin, second-class particles at sites 0 and 1, and holes to the right of site 1. We show that the probability that the two second-class particles eventually collide is (1 + p)/(3p), where a collision occurs when one of the particles attempts to jump over the other. This also corresponds to the probability that two ASEP processes, started from appropriate initial states and coupled using the so-called "basic coupling," eventually reach the same state. We give various other results about the behaviour of second-class particles in the ASEP. In the totally asymmetric case (p = 1) we explain a further representation in terms of a multi-type particle system, and also use the collision result to derive the probability of coexistence of both clusters in a two-type version of the corner growth model.
Abstract:
The dynamical discrete web (DyDW), introduced in the recent work of Howitt and Warren, is a system of coalescing simple symmetric one-dimensional random walks which evolve in an extra continuous dynamical time parameter tau. The evolution is by independent updating of the underlying Bernoulli variables indexed by discrete space-time that define the discrete web at any fixed tau. In this paper, we study the existence of exceptional (random) values of tau where the paths of the web do not behave like usual random walks, and the Hausdorff dimension of the set of such exceptional tau. Our results are motivated by those about exceptional times for dynamical percolation in high dimension by Haggstrom, Peres and Steif, and in dimension two by Schramm and Steif. The exceptional behavior of the walks in the DyDW is rather different from the situation for the dynamical random walks of Benjamini, Haggstrom, Peres and Steif. For example, we prove that the walk from the origin S_0(tau) violates the law of the iterated logarithm (LIL) on a set of tau of Hausdorff dimension one. We also discuss how these and other results should extend to the dynamical Brownian web, the natural scaling limit of the DyDW.
Abstract:
Efficient automatic protein classification is of central importance in genomic annotation. As an independent way to check the reliability of the classification, we propose a statistical approach to test if two sets of protein domain sequences coming from two families of the Pfam database are significantly different. We model protein sequences as realizations of Variable Length Markov Chains (VLMC) and we use the context trees as a signature of each protein family. Our approach is based on a Kolmogorov-Smirnov-type goodness-of-fit test proposed by Balding et al. [Limit theorems for sequences of random trees (2008), DOI: 10.1007/s11749-008-0092-z]. The test statistic is a supremum, over the space of trees, of a function of the two samples; its computation grows, in principle, exponentially fast with the maximal number of nodes of the potential trees. We show how to transform this problem into a max-flow problem over a related graph, which can be solved using a Ford-Fulkerson algorithm in time polynomial in that number. We apply the test to 10 randomly chosen protein domain families from the seed of the Pfam-A database (high-quality, manually curated families). The test shows that the distributions of context trees coming from different families are significantly different. We emphasize that this is a novel mathematical approach to validate the automatic clustering of sequences in any context. We also study the performance of the test via simulations on Galton-Watson-related processes.
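The reduction is specific to the tree statistic, but the computational primitive it relies on, a maximum-flow problem solved by an augmenting-path (Ford-Fulkerson-type) algorithm, can be illustrated generically (Python with NetworkX assumed); the toy graph and capacities below are arbitrary and are not the graph constructed in the paper.

```python
# Generic illustration of the max-flow primitive referenced above, using the
# Edmonds-Karp variant of Ford-Fulkerson. Toy graph, not the paper's construction.
import networkx as nx
from networkx.algorithms.flow import edmonds_karp

G = nx.DiGraph()
G.add_edge("s", "a", capacity=3.0)
G.add_edge("s", "b", capacity=2.0)
G.add_edge("a", "b", capacity=1.0)
G.add_edge("a", "t", capacity=2.0)
G.add_edge("b", "t", capacity=3.0)

flow_value, flow_dict = nx.maximum_flow(G, "s", "t", flow_func=edmonds_karp)
print("max flow value:", flow_value)        # 5.0 for this toy graph
print("flow on each edge:", flow_dict)
```

Edmonds-Karp is the breadth-first specialization of Ford-Fulkerson and runs in polynomial time, which is the complexity property the test computation relies on.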