163 results for Random process
at Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
We present simple procedures for the prediction of a real-valued sequence. The algorithms are based on a combination of several simple predictors. We show that if the sequence is a realization of a bounded stationary and ergodic random process, then the average of squared errors converges, almost surely, to that of the optimum, given by the Bayes predictor. We offer an analogous result for the prediction of stationary Gaussian processes.
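As a rough illustration of this kind of expert-aggregation scheme, the sketch below combines a grid of constant predictors with exponentially weighted averaging. The expert set, the learning rate eta and the boundedness constant are assumptions made for the example; they are not the specific predictors or weighting rule of the paper.

```python
# Minimal sketch: online prediction of a bounded real sequence by an
# exponentially weighted average of simple constant "experts".
import numpy as np

def aggregate_predict(sequence, expert_grid, eta=2.0, bound=1.0):
    """Return the per-round predictions of the weighted-average forecaster."""
    experts = np.asarray(expert_grid, dtype=float)
    weights = np.ones_like(experts)
    preds = []
    for y in sequence:
        # prediction: weighted average of the experts' (constant) forecasts
        preds.append(float(np.dot(weights, experts) / weights.sum()))
        # exponential weight update based on each expert's squared error,
        # normalised to [0, 1] using the boundedness assumption
        losses = (experts - y) ** 2 / (4 * bound ** 2)
        weights *= np.exp(-eta * losses)
        weights /= weights.max()          # rescale for numerical stability
    return np.array(preds)

# toy usage: a noisy, bounded, stationary sequence
rng = np.random.default_rng(0)
seq = 0.3 + 0.1 * rng.standard_normal(500)
predictions = aggregate_predict(seq, np.linspace(-1, 1, 21))
print(np.mean((predictions - seq) ** 2))   # average squared error of the sketch
```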
Abstract:
Previous indirect evidence suggests that impulses towards pro-social behavior are diminished when an external authority is responsible for an outcome. The responsibility-alleviation effect states that a shift of responsibility to an external authority dampens internal impulses toward honesty, loyalty, or generosity. In a gift-exchange experiment, we find that subjects respond with more generosity (higher effort) when a wage is determined by a random process than when it is assigned by a third party, indicating that even a slight shift in perceived responsibility for the final payoffs can change behavior. Responsibility-alleviation is a factor in economic environments featuring substantial personal interaction.
Abstract:
We present a simple randomized procedure for the prediction of a binary sequence. The algorithm uses ideas from recent developments of the theory of the prediction of individual sequences. We show that if the sequence is a realization of a stationary and ergodic random process, then the average number of mistakes converges, almost surely, to that of the optimum, given by the Bayes predictor.
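A minimal sketch of the flavour of such a procedure, assuming a small pool of toy experts (always 0, always 1, repeat last bit, flip last bit) combined by a randomized weighted-majority rule; the expert set and the penalty factor beta are illustrative choices, not the paper's algorithm.

```python
# Randomized weighted-majority sketch for binary prediction.
import numpy as np

def randomized_predict(bits, beta=0.7, seed=0):
    """Return the average mistake rate of the randomized forecaster."""
    rng = np.random.default_rng(seed)
    weights = np.ones(4)          # one weight per toy expert
    prev, mistakes = 0, 0
    for y in bits:
        # expert forecasts: always 0, always 1, repeat last bit, flip last bit
        preds = np.array([0, 1, prev, 1 - prev])
        # predict 1 with probability equal to the weighted fraction voting 1
        p_one = weights[preds == 1].sum() / weights.sum()
        guess = int(rng.random() < p_one)
        mistakes += int(guess != y)
        # multiplicative penalty for the experts that were wrong
        weights[preds != y] *= beta
        prev = y
    return mistakes / len(bits)

bits = (np.random.default_rng(1).random(1000) < 0.7).astype(int)
print(randomized_predict(bits))   # mistake rate on a toy i.i.d. bit sequence
```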
Abstract:
It is very well known that the first successful valuation of a stock option was done by solving a deterministic partial differential equation (PDE) of the parabolic type with some complementary conditions specific to the option. In this approach, the randomness in the option value process is eliminated through a no-arbitrage argument. An alternative approach is to construct a replicating portfolio for the option. From this viewpoint the payoff function for the option is a random process which, under a new probability measure, turns out to be of a special type, a martingale. Accordingly, the value of the replicating portfolio (equivalently, of the option) is calculated as an expectation, with respect to this new measure, of the discounted value of the payoff function. Since the expectation is, by definition, an integral, its calculation can be made simpler by resorting to powerful methods already available in the theory of analytic functions. In this paper we use precisely two of those techniques to find the well-known value of a European call option.
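The martingale-pricing idea described here can be illustrated numerically: under the risk-neutral measure the call value equals the discounted expected payoff, which the sketch below estimates by Monte Carlo and compares with the closed-form Black-Scholes price. The parameter values are arbitrary; the paper's own contribution (analytic-function techniques for evaluating the expectation) is not reproduced here.

```python
# European call as a discounted expectation under the risk-neutral measure.
import numpy as np
from math import log, sqrt, exp, erf

S0, K, r, sigma, T = 100.0, 105.0, 0.02, 0.25, 1.0   # illustrative market parameters

# Monte Carlo: simulate terminal prices under the risk-neutral measure and
# average the discounted payoff.
rng = np.random.default_rng(0)
Z = rng.standard_normal(200_000)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
mc_price = exp(-r * T) * np.mean(np.maximum(ST - K, 0.0))

# Closed-form Black-Scholes price of the same call, for comparison.
def Phi(x):                       # standard normal CDF
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
d2 = d1 - sigma * sqrt(T)
bs_price = S0 * Phi(d1) - K * exp(-r * T) * Phi(d2)

print(round(mc_price, 3), round(bs_price, 3))   # the two values should agree closely
```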
Abstract:
In this paper we explore the effect of bounded rationality on the convergence of individual behavior toward equilibrium. In the context of a Cournot game with a unique and symmetric Nash equilibrium, firms are modeled as adaptive economic agents through a genetic algorithm. Computational experiments show that (1) there is remarkable heterogeneity across identical but boundedly rational agents; (2) such individual heterogeneity is not simply a consequence of the random elements contained in the genetic algorithm; (3) the more rational agents are in terms of memory abilities and pre-play evaluation of strategies, the less heterogeneous they are in their actions. In the limiting case of full rationality, the outcome converges to the standard result of uniform individual behavior.
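A minimal sketch of the kind of setup the abstract describes: two populations of candidate Cournot quantities evolved by fitness-proportional selection and Gaussian mutation (no crossover, for brevity), each candidate being evaluated against the rival population's mean quantity. The linear demand parameters and GA settings are assumptions for the illustration, not the paper's design.

```python
# Boundedly rational Cournot duopoly via a toy genetic algorithm.
import numpy as np

a, b, c = 10.0, 1.0, 1.0        # linear inverse demand P = a - b*Q, marginal cost c
nash_q = (a - c) / (3 * b)      # symmetric Cournot-Nash quantity for the duopoly
rng = np.random.default_rng(0)

def profit(q_own, q_other):
    price = max(a - b * (q_own + q_other), 0.0)
    return (price - c) * q_own

def evolve(pop, q_other_mean, mut=0.1):
    """One generation: fitness-proportional selection plus Gaussian mutation."""
    fit = np.array([profit(q, q_other_mean) for q in pop])
    fit = fit - fit.min() + 1e-9                  # shift so selection probabilities are valid
    parents = rng.choice(pop, size=pop.size, p=fit / fit.sum())
    return np.clip(parents + mut * rng.standard_normal(pop.size), 0.0, a / b)

pop1 = rng.uniform(0, a / b, 30)   # firm 1's candidate quantities
pop2 = rng.uniform(0, a / b, 30)   # firm 2's candidate quantities
for _ in range(300):
    pop1, pop2 = evolve(pop1, pop2.mean()), evolve(pop2, pop1.mean())

# population means drift toward, but remain dispersed around, the Nash quantity
print(pop1.mean(), pop2.mean(), nash_q)
```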
Abstract:
Counting labelled planar graphs, and typical properties of random labelled planar graphs, have received much attention recently. Here we start the process of extending these investigations to graphs embeddable on any fixed surface S. In particular, we show that the labelled graphs embeddable on S have the same growth constant as planar graphs, and the same holds for unlabelled graphs. Also, if we pick a graph uniformly at random from the graphs embeddable on S which have vertex set {1, ..., n}, then with probability tending to 1 as n → ∞, this random graph either is connected or consists of one giant component together with a few nodes in small planar components.
Abstract:
One of the key aspects of 3D-image registration is the computation of the joint intensity histogram. We propose a new approach to compute this histogram using uniformly distributed random lines to stochastically sample the overlapping volume between two 3D-images. The intensity values are captured from the lines at evenly spaced positions, taking an initial random offset different for each line. This method provides us with an accurate, robust and fast mutual information-based registration. The interpolation effects are drastically reduced, due to the stochastic nature of the line generation, and the alignment process is also accelerated. The results obtained show a better performance of the introduced method than the classic computation of the joint histogram.
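A hedged sketch of the sampling idea: cast randomly oriented lines through two aligned volumes, read intensities at evenly spaced positions with a per-line random offset, accumulate a joint histogram, and compute mutual information from it. The line-generation recipe, step size and bin count below are illustrative choices, not the paper's exact procedure.

```python
# Joint intensity histogram of two aligned 3D volumes via random-line sampling.
import numpy as np

def joint_histogram_random_lines(vol_a, vol_b, n_lines=2000, step=1.0, bins=32, seed=0):
    rng = np.random.default_rng(seed)
    shape = np.array(vol_a.shape, dtype=float)
    hist = np.zeros((bins, bins))
    for _ in range(n_lines):
        origin = rng.random(3) * (shape - 1)                 # random starting point
        d = rng.standard_normal(3); d /= np.linalg.norm(d)   # random direction
        t = rng.random() * step                              # random initial offset
        while True:
            p = origin + t * d
            if np.any(p < 0) or np.any(p > shape - 1):       # line has left the volume
                break
            i, j, k = np.round(p).astype(int)                # nearest-neighbour sample
            ia = int(vol_a[i, j, k] * (bins - 1))            # intensities assumed in [0, 1]
            ib = int(vol_b[i, j, k] * (bins - 1))
            hist[ia, ib] += 1
            t += step
    return hist

def mutual_information(hist):
    p = hist / hist.sum()
    px, py = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
    nz = p > 0
    return float(np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz])))

# toy usage with two correlated random volumes normalised to [0, 1]
rng = np.random.default_rng(1)
A = rng.random((32, 32, 32))
B = np.clip(A + 0.1 * rng.standard_normal(A.shape), 0, 1)
print(mutual_information(joint_histogram_random_lines(A, B)))
```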
Abstract:
The space subdivision in cells resulting from a process of random nucleation and growth is a subject of interest in many scientific fields. In this paper, we deduce the expected value and variance of the resulting cell-size distributions while assuming that the space subdivision process is in accordance with the premises of the Kolmogorov-Johnson-Mehl-Avrami model. We have not imposed restrictions on the time dependence of the nucleation and growth rates. We have also developed an approximate analytical cell-size probability density function. Finally, we have applied our approach to the distributions resulting from solid-phase crystallization under isochronal heating conditions.
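For reference, the KJMA premise the abstract builds on is usually written through the extended-volume construction; for a time-dependent nucleation rate I(t) and isotropic three-dimensional growth at rate G(t), the transformed fraction is (textbook form, not the paper's new result):

```latex
X(t) \;=\; 1 - \exp\!\left[ -\frac{4\pi}{3} \int_0^t I(\tau)
          \left( \int_\tau^t G(s)\, \mathrm{d}s \right)^{3} \mathrm{d}\tau \right]
```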
Abstract:
We present exact equations and expressions for the first-passage-time statistics of dynamical systems that are a combination of a diffusion process and a random external force modeled as dichotomous Markov noise. We prove that the mean first-passage time for this system does not show any resonant-like behavior.
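A Monte Carlo sketch of the kind of system studied here: motion driven by Gaussian white noise plus an external dichotomous Markov (telegraph) force, with the mean first-passage time out of an interval estimated by simulation. The dynamics dx = F dt + sqrt(2D) dW, the switching rate and all parameter values are assumptions for the illustration; the paper's results are exact, not simulation-based.

```python
# Mean first-passage time for diffusion plus a dichotomous Markov force.
import numpy as np

def mfpt(x0=0.0, L=1.0, D=0.5, A=1.0, lam=1.0, dt=1e-3, n_paths=400, seed=0):
    """Estimate the mean exit time from (-L, L) for dx = F dt + sqrt(2D) dW,
    where F switches between +A and -A at rate lam."""
    rng = np.random.default_rng(seed)
    exit_times = []
    for _ in range(n_paths):
        x = x0
        F = A if rng.random() < 0.5 else -A    # random initial state of the force
        t = 0.0
        while abs(x) < L:
            if rng.random() < lam * dt:        # Markov switching of the dichotomous force
                F = -F
            x += F * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal()
            t += dt
        exit_times.append(t)
    return float(np.mean(exit_times))

print(mfpt())   # Monte Carlo estimate of the mean first-passage time
```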
Abstract:
A dynamical model based on a continuous addition of colored shot noises is presented. The resulting process is colored and non-Gaussian. A general expression for the characteristic function of the process is obtained, which, after a scaling assumption, takes on a form that is the basis of the results derived in the rest of the paper. One of these is an expansion for the cumulants, which are all finite, subject to mild conditions on the functions defining the process. This is in contrast with the Lévy distribution (which can be obtained from our model in certain limits), which has no finite moments. The evaluation of the spectral density and the form of the probability density function in the tails of the distribution shows that the model exhibits a power-law spectrum and long tails in a natural way. A careful analysis of the characteristic function shows that it may be separated into a part representing a Lévy process together with another part representing the deviation of our model from the Lévy process.
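A generic shot-noise toy related to the model above: a superposition of exponentially decaying pulses arriving at Poisson times. The pulse shape, rate and amplitude distribution are arbitrary choices; this toy does not reproduce the paper's scaling construction, its power-law spectrum or its Lévy limits, and is included only to fix the basic shot-noise picture.

```python
# Generic shot noise: Poisson arrivals of exponentially decaying pulses.
import numpy as np

def shot_noise(T=200.0, dt=0.01, rate=5.0, tau=1.0, seed=0):
    rng = np.random.default_rng(seed)
    t = np.arange(0.0, T, dt)
    x = np.zeros_like(t)
    n_events = rng.poisson(rate * T)               # number of Poisson arrivals in [0, T]
    arrivals = rng.uniform(0.0, T, n_events)
    amplitudes = rng.exponential(1.0, n_events)    # unit-mean random pulse heights
    for ti, ai in zip(arrivals, amplitudes):
        mask = t >= ti
        x[mask] += ai * np.exp(-(t[mask] - ti) / tau)
    return t, x

t, x = shot_noise()
# Campbell's theorem check: with unit-mean exponential amplitudes, both the mean
# and the variance of the stationary process are approximately rate * tau.
print(x.mean(), x.var())
```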
Abstract:
The continuous-time random walk (CTRW) formalism can be adapted to encompass stochastic processes with memory. In this paper we will show how the random combination of two different unbiased CTRWs can give rise to a process with clear drift, if one of them is a CTRW with memory. If one identifies the other one as noise, the effect can be thought of as a kind of stochastic resonance. The ultimate origin of this phenomenon is the same as that of the Parrondo paradox in game theory.
Abstract:
The present study explores the statistical properties of a randomization test based on the random assignment of the intervention point in a two-phase (AB) single-case design. The focus is on randomization distributions constructed with the values of the test statistic for all possible random assignments and used to obtain p-values. The shape of those distributions is investigated for each specific data division defined by the moment at which the intervention is introduced. Another aim of the study was to test the detection of nonexistent effects (i.e., the production of false alarms) in autocorrelated data series, in which the assumption of exchangeability between observations may be untenable. In this way, it was possible to compare nominal and empirical Type I error rates in order to obtain evidence on the statistical validity of the randomization test for each individual data division. The results suggest that when either of the two phases has considerably fewer measurement times, Type I errors may be too probable and, hence, the decision-making process to be carried out by applied researchers may be jeopardized.
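A minimal sketch of such a randomization test, assuming the test statistic is the difference between phase means and that the admissible intervention points are those leaving at least a few measurements in each phase (both assumptions for the example): the statistic is computed for every admissible data division, and the p-value is the proportion of the randomization distribution at least as extreme as the observed value.

```python
# Randomization test for a two-phase (AB) single-case design.
import numpy as np

def ab_randomization_test(y, observed_start, min_phase=3):
    """Return the observed statistic and its randomization-test p-value."""
    y = np.asarray(y, dtype=float)
    # admissible intervention points: each phase keeps at least min_phase points
    candidates = range(min_phase, len(y) - min_phase + 1)

    def stat(k):
        return y[k:].mean() - y[:k].mean()        # B-phase mean minus A-phase mean

    dist = np.array([stat(k) for k in candidates])   # randomization distribution
    observed = stat(observed_start)
    p_value = np.mean(np.abs(dist) >= abs(observed))
    return observed, p_value

# false-alarm check on a series with no true intervention effect
rng = np.random.default_rng(0)
series = rng.normal(0.0, 1.0, 20)
print(ab_randomization_test(series, observed_start=10))
```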
Abstract:
The scope of this work is the systematic study of the silicidation process affecting tungsten filaments at high temperature (1900 °C) used for silane decomposition in the hot-wire chemical vapour deposition (HWCVD) technique. The correlation between the electrical resistance evolution of the filaments, Rfil(t), and the different stages of their silicidation process is presented. These stages correspond to: the rapid formation of two WSi2 fronts at the cold ends of the filaments and their further propagation towards the middle of the filaments; and, regarding the hot central portion of the filaments, an initial stage of silicon dissolution into the tungsten bulk, with a random duration for as-manufactured filaments, followed by the inhomogeneous nucleation of W5Si3 (which is later replaced by WSi2) and its further growth towards the filament core. An electrical model is used to obtain real-time information about the current status of the filaments' silicidation by simply monitoring their Rfil(t) evolution during the HWCVD process. It is shown that applying an annealing pre-treatment to the filaments leads to a clearly repetitive trend in the monitored Rfil(t) signatures. The influence of hydrogen dilution of silane on the filaments' silicidation is also discussed.
Abstract:
We present a model in which particles (or individuals of a biological population) disperse with a rest time between consecutive motions (or migrations) which may take several possible values from a discrete set. Particles (or individuals) may also react (or reproduce). We derive a new equation for the effective rest time T̃ of the random walk. Application to the Neolithic transition in Europe makes it possible to derive more realistic theoretical values for its wavefront speed than those following from the single-delayed framework presented previously [J. Fort and V. Méndez, Phys. Rev. Lett. 82, 867 (1999)]. The new results are consistent with the archaeological observations of this important historical process.