918 results for Pseudo-random
Abstract:
We present here a nonbiased probabilistic method that allows us to consistently analyze knottedness of linear random walks with up to several hundred noncorrelated steps. The method consists of analyzing the spectrum of knots formed by multiple closures of the same open walk through random points on a sphere enclosing the walk. Knottedness of individual "frozen" configurations of linear chains is therefore defined by a characteristic spectrum of realizable knots. We show that in the great majority of cases this method clearly defines the dominant knot type of a walk, i.e., the strongest component of the spectrum. In such cases, direct end-to-end closure creates a knot that usually coincides with the knot type that dominates the random closure spectrum. Interestingly, in a very small proportion of linear random walks, the knot type is not clearly defined. Such walks can be considered as residing in a border zone of the configuration space of two or more knot types. We also characterize the scaling behavior of linear random knots.
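The closure procedure described above lends itself to a short sketch. The code below is not the authors' implementation: it generates a random walk, samples uniformly distributed closure points on a sphere enclosing the walk, and tallies the resulting knot types. The knot classifier is a placeholder, since identifying the knot type of a closed curve requires a knot invariant (e.g., the Alexander or HOMFLY polynomial) computed by a dedicated knot-theory library.

```python
# Sketch of the random-closure procedure (not the authors' code).
from collections import Counter
import numpy as np

def random_walk(n_steps, rng):
    """Random walk with unit, uncorrelated steps in 3D."""
    steps = rng.normal(size=(n_steps, 3))
    steps /= np.linalg.norm(steps, axis=1, keepdims=True)
    return np.vstack([np.zeros(3), np.cumsum(steps, axis=0)])

def closure_points(walk, n_closures, rng):
    """Uniform random points on a sphere safely enclosing the whole walk."""
    center = walk.mean(axis=0)
    radius = 3.0 * np.max(np.linalg.norm(walk - center, axis=1))
    directions = rng.normal(size=(n_closures, 3))
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)
    return center + radius * directions

def knot_type(closed_curve):
    """Placeholder classifier: always reports the unknot.
    Replace with a real knot-invariant computation to reproduce the method."""
    return "0_1"

def knot_spectrum(walk, n_closures=100, seed=0):
    """Close the same open walk through many random points and tally knot types."""
    rng = np.random.default_rng(seed)
    spectrum = Counter()
    for point in closure_points(walk, n_closures, rng):
        closed = np.vstack([walk, point, walk[:1]])  # both ends joined via the point
        spectrum[knot_type(closed)] += 1
    return spectrum

walk = random_walk(300, np.random.default_rng(1))
print(knot_spectrum(walk).most_common(3))  # dominant knot type(s) of this walk
```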
Abstract:
The present study explores the statistical properties of a randomization test based on the random assignment of the intervention point in a two-phase (AB) single-case design. The focus is on randomization distributions constructed from the values of the test statistic for all possible random assignments and used to obtain p-values. The shape of those distributions is investigated for each specific data division defined by the moment at which the intervention is introduced. Another aim of the study was to test the detection of nonexistent effects (i.e., the production of false alarms) in autocorrelated data series, in which the assumption of exchangeability between observations may be untenable. In this way, it was possible to compare nominal and empirical Type I error rates in order to obtain evidence on the statistical validity of the randomization test for each individual data division. The results suggest that when either of the two phases has considerably fewer measurement times, Type I errors may be too probable and, hence, the decision-making process carried out by applied researchers may be jeopardized.
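As a rough illustration of such a test (not the study's code), the sketch below builds the randomization distribution over all admissible intervention points and returns a p-value. The test statistic (the absolute difference between phase means) and the minimum phase length are assumptions made here for illustration only.

```python
# Illustrative randomization test for a two-phase (AB) single-case design.
import numpy as np

def ab_statistic(data, split):
    """Absolute difference between phase A (before split) and phase B means."""
    return abs(np.mean(data[split:]) - np.mean(data[:split]))

def randomization_pvalue(data, actual_split, min_phase=3):
    """p-value from the distribution of the statistic over all admissible splits."""
    splits = range(min_phase, len(data) - min_phase + 1)
    distribution = [ab_statistic(data, s) for s in splits]
    observed = ab_statistic(data, actual_split)
    return np.mean([d >= observed for d in distribution])

rng = np.random.default_rng(0)
series = rng.normal(size=20)              # no true intervention effect
print(randomization_pvalue(series, actual_split=10))
```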
Abstract:
The art of Alcibiades: Bertrand de Jouvenel's The Pseudo-Alcibiades as an apology for politicians
Abstract:
Limited migration results in kin selective pressure on helping behaviors under a wide range of ecological, demographic and life-history situations. However, such genetically determined altruistic helping can evolve only when migration is not too strong and group size is not too large. Cultural inheritance of helping behaviors may allow altruistic helping to evolve in groups of larger size because cultural transmission has the potential to markedly decrease the variance within groups and augment the variance between groups. Here, we study the co-evolution of culturally inherited altruistic helping behaviors and two alternative cultural transmission rules for such behaviors. We find that conformist transmission, where individuals within groups tend to copy prevalent cultural variants (e.g., beliefs or values), has a strong adverse effect on the evolution of culturally inherited helping traits. This finding is at variance with the commonly held view that conformist transmission is a crucial factor favoring the evolution of altruistic helping in humans. By contrast, we find that under one-to-many transmission, where individuals within groups tend to copy a "leader" (or teacher), altruistic helping can evolve in groups of any size, although the cultural transmission rule itself hitchhikes rather weakly with a selected helping trait. Our results suggest that culturally determined helping behaviors are more likely to be driven by "leaders" than by popularity, but the emergence and stability of the cultural transmission rules themselves should be driven by some extrinsic factors.
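A toy sketch of the two transmission rules discussed above is given below. It is not the authors' model (payoffs, migration and group structure are omitted), and the conformity exponent is an arbitrary illustrative parameter.

```python
# Toy contrast between conformist and one-to-many cultural transmission for a
# single group of cultural variants (0 = not helping, 1 = helping).
import numpy as np

def conformist_step(variants, rng, strength=1.5):
    """Each individual copies a variant with probability biased toward the
    locally more common one (the exponent `strength` is an assumed parameter)."""
    p_help = np.mean(variants)
    biased = p_help**strength / (p_help**strength + (1 - p_help)**strength)
    return rng.random(len(variants)) < biased

def one_to_many_step(variants, rng):
    """Everyone copies a single randomly chosen 'leader' (teacher)."""
    leader = rng.integers(len(variants))
    return np.full(len(variants), variants[leader])

rng = np.random.default_rng(0)
group = rng.random(50) < 0.3              # 30% initial helpers
print(np.mean(conformist_step(group, rng)), np.mean(one_to_many_step(group, rng)))
```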
Abstract:
Final degree project on keypoint recognition in images using the Random Ferns algorithm.
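For context, a minimal Random Ferns classifier could look like the sketch below; the patch size, number of ferns, fern depth and class count are arbitrary choices, and the project's actual implementation may differ.

```python
# Minimal Random Ferns classifier: each fern is a set of binary pixel-intensity
# comparisons, and class posteriors are combined semi-naively across ferns.
import numpy as np

class RandomFerns:
    def __init__(self, n_ferns=10, depth=8, patch=16, n_classes=20, seed=0):
        rng = np.random.default_rng(seed)
        # Each binary feature compares the intensities of two random pixels.
        self.pairs = rng.integers(0, patch * patch, size=(n_ferns, depth, 2))
        self.counts = np.ones((n_classes, n_ferns, 2 ** depth))  # Laplace-smoothed

    def _fern_codes(self, flat_patch):
        bits = flat_patch[self.pairs[:, :, 0]] < flat_patch[self.pairs[:, :, 1]]
        return bits.dot(1 << np.arange(bits.shape[1]))           # one code per fern

    def train(self, flat_patch, label):
        ferns = np.arange(self.counts.shape[1])
        self.counts[label, ferns, self._fern_codes(flat_patch)] += 1

    def classify(self, flat_patch):
        ferns = np.arange(self.counts.shape[1])
        probs = self.counts[:, ferns, self._fern_codes(flat_patch)]
        probs /= self.counts.sum(axis=2)   # per-fern class-conditional probabilities
        return int(np.argmax(np.log(probs).sum(axis=1)))  # combine ferns, uniform prior

rng = np.random.default_rng(1)
ferns = RandomFerns()
patch = rng.random(16 * 16)
ferns.train(patch, label=3)
print(ferns.classify(patch))   # recovers class 3 after training on this patch
```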
Abstract:
Recently, several anonymization algorithms have appeared for privacy preservation on graphs. Some of them are based on randomization techniques and others on k-anonymity concepts. Both can be used to obtain an anonymized graph with a given k-anonymity value. In this paper we compare algorithms based on both techniques for obtaining an anonymized graph with a desired k-anonymity value. We analyze the complexity of these methods in generating anonymized graphs and the quality of the resulting graphs.
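A minimal sketch of the two ideas being compared might look as follows. Here "randomization" is modeled simply as removing and re-adding a fraction of edges at random, and k-anonymity is checked on vertex degrees (degree anonymity); both are assumptions for illustration rather than the paper's exact definitions.

```python
# Edge randomization and a degree-based k-anonymity check on a toy graph.
import random
import networkx as nx

def randomize_edges(graph, fraction=0.1, seed=0):
    """Remove a fraction of edges and add the same number of random non-edges."""
    rng = random.Random(seed)
    g = graph.copy()
    removed = rng.sample(list(g.edges()), int(fraction * g.number_of_edges()))
    g.remove_edges_from(removed)
    g.add_edges_from(rng.sample(list(nx.non_edges(g)), len(removed)))
    return g

def degree_k_anonymity(graph):
    """Smallest number of vertices sharing any degree value."""
    degrees = [d for _, d in graph.degree()]
    return min(degrees.count(d) for d in set(degrees))

g = nx.karate_club_graph()
print(degree_k_anonymity(g), degree_k_anonymity(randomize_edges(g)))
```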
Abstract:
We present a model in which particles (or individuals of a biological population) disperse with a rest time between consecutive motions (or migrations) which may take several possible values from a discrete set. Particles (or individuals) may also react (or reproduce). We derive a new equation for the effective rest time T̃ of the random walk. Application to the Neolithic transition in Europe makes it possible to derive more realistic theoretical values for its wavefront speed than those following from the single-delayed framework presented previously [J. Fort and V. Méndez, Phys. Rev. Lett. 82, 867 (1999)]. The new results are consistent with the archaeological observations of this important historical process.
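For reference, the single-delay framework cited above gives the wavefront speed shown below (D the diffusivity, a the initial growth rate, τ the single rest time); in the generalized model the single delay is replaced by the effective rest time T̃, whose expression is derived in the paper and not reproduced here.

```latex
% Wavefront speed of the single-delay model (Fort and Mendez 1999), quoted for
% context; in the generalized model \tau is replaced by the effective rest
% time \tilde{T} obtained from the discrete set of possible rest times.
v = \frac{2\sqrt{aD}}{1 + a\tau/2}
```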
Abstract:
We generalize a previous model of time-delayed reaction–diffusion fronts (Fort and Méndez 1999 Phys. Rev. Lett. 82 867) to allow for a bias in the microscopic random walk of particles or individuals. We also present a second model which takes the time order of events (diffusion and reproduction) into account. As an example, we apply them to the human invasion front across the USA in the 19th century. The corrections relative to the previous model are substantial. Our results are relevant to physical and biological systems with anisotropic fronts, including particle diffusion in disordered lattices, population invasions, the spread of epidemics, etc.
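The effect of a bias in the microscopic walk can be illustrated with a toy one-dimensional simulation (not the authors' model); the jump bias, rest time and growth rate below are arbitrary illustrative values.

```python
# Toy 1D reaction-random-walk front with a biased jump probability.
import numpy as np

def front_speed(p_right=0.6, rest_time=1.0, growth_rate=0.03, steps=500, cells=2000):
    n = np.zeros(cells)
    n[:10] = 1.0                                               # initially occupied region
    for _ in range(steps):
        n = n + rest_time * growth_rate * n * (1.0 - n)        # logistic growth per rest
        n = p_right * np.roll(n, 1) + (1.0 - p_right) * np.roll(n, -1)  # biased jumps
        n[0] = n[-1] = 0.0                                     # crude absorbing boundaries
    front = np.max(np.nonzero(n > 0.5))                        # rightmost cell above half
    return front / (steps * rest_time)                         # cells per unit time

print(front_speed(p_right=0.5), front_speed(p_right=0.6))      # unbiased vs biased walk
```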
Abstract:
Inference in Markov random field image segmentation models is usually performed using iterative methods that adapt the well-known expectation-maximization (EM) algorithm for independent mixture models. However, some of these adaptations are ad hoc and may turn out to be numerically unstable. In this paper, we review three EM-like variants for Markov random field segmentation and compare their convergence properties at both the theoretical and practical levels. We specifically advocate a numerical scheme involving asynchronous voxel updating, for which general convergence results can be established. Our experiments on brain tissue classification in magnetic resonance images provide evidence that this algorithm may achieve significantly faster convergence than its competitors while yielding at least as good segmentation results.
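A minimal sketch of an EM-like scheme of this kind (not necessarily the paper's exact variant) is shown below: Gaussian class likelihoods, a Potts-like spatial coupling, and an asynchronous sweep that updates each pixel's class posterior in place so that later pixels already see the new values. The coupling strength β and the toy image are illustrative choices.

```python
# EM-like MRF segmentation sketch with asynchronous (in-place) posterior updates.
import numpy as np

def neighbors(i, j, shape):
    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        if 0 <= i + di < shape[0] and 0 <= j + dj < shape[1]:
            yield i + di, j + dj

def em_mrf_segmentation(image, n_classes=2, beta=1.0, n_iter=10):
    means = np.linspace(image.min(), image.max(), n_classes)
    variances = np.full(n_classes, image.var())
    post = np.full(image.shape + (n_classes,), 1.0 / n_classes)   # soft labels q(z)
    for _ in range(n_iter):
        # E-like step: asynchronous sweep, each pixel uses its neighbors' current q
        for i in range(image.shape[0]):
            for j in range(image.shape[1]):
                prior = beta * sum(post[n] for n in neighbors(i, j, image.shape))
                loglik = -0.5 * ((image[i, j] - means) ** 2 / variances
                                 + np.log(variances))
                p = np.exp(loglik + prior - np.max(loglik + prior))
                post[i, j] = p / p.sum()                           # updated in place
        # M-step: re-estimate Gaussian parameters from the soft labels
        weights = post.reshape(-1, n_classes)
        totals = weights.sum(axis=0)
        means = weights.T.dot(image.ravel()) / totals
        variances = weights.T.dot(image.ravel() ** 2) / totals - means ** 2 + 1e-6
    return post.argmax(axis=-1)

rng = np.random.default_rng(0)
img = np.zeros((32, 32))
img[:, 16:] = 1.0                                # two-region toy "image"
img += 0.3 * rng.normal(size=(32, 32))
print(np.bincount(em_mrf_segmentation(img).ravel()))   # pixels assigned to each class
```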