3 results for stochastic process
in BORIS: Bern Open Repository and Information System - Bern - Switzerland
Abstract:
Let Y be a stochastic process on [0,1] satisfying dY(t) = n^{1/2} f(t) dt + dW(t), where n ≥ 1 is a given scale parameter ('sample size'), W is standard Brownian motion and f is an unknown function. Utilizing suitable multiscale tests, we construct confidence bands for f with guaranteed given coverage probability, assuming that f is isotonic or convex. These confidence bands are computationally feasible and shown to be asymptotically sharp optimal in an appropriate sense.
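As a hedged illustration (not the paper's multiscale construction), the white-noise model can be discretized on a uniform grid and a monotone estimate of f obtained by isotonic regression; the signal f, the scale parameter n, and the grid size below are assumptions made only for this sketch.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

# Discretize dY(t) = n^{1/2} f(t) dt + dW(t) on a uniform grid of [0, 1]:
# each increment is Gaussian with mean n^{1/2} f(t_i) dt and variance dt.
rng = np.random.default_rng(0)
n = 200                                   # scale parameter ("sample size"), assumed
m = 500                                   # number of grid points, assumed
t = np.linspace(0.0, 1.0, m)
dt = 1.0 / m
f_true = t ** 2                           # assumed isotonic signal for the demo
dY = np.sqrt(n) * f_true * dt + rng.normal(0.0, np.sqrt(dt), size=m)

# Naive pointwise estimate of f from the increments, then project onto
# monotone functions with isotonic regression (a stand-in for the paper's bands).
f_raw = dY / (np.sqrt(n) * dt)
f_iso = IsotonicRegression().fit_transform(t, f_raw)
```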
Abstract:
The variables involved in the equations that describe realistic synaptic dynamics always vary within a limited range. Their boundedness makes the synapses forgetful, not through the mere passage of time, but because new experiences overwrite old memories. The forgetting rate depends on how many synapses are modified by each new experience: many changes mean fast learning and fast forgetting, whereas few changes mean slow learning and long memory retention. Reducing the average number of modified synapses can extend the memory span, at the price of a reduced amount of information stored when a new experience is memorized. Every trick that slows down the learning process in a smart way can improve memory performance. We review some of the tricks that make it possible to elude fast forgetting (oblivion). They are based on the stochastic selection of the synapses whose modifications are actually consolidated following each new experience. In practice, only a randomly selected, small fraction of the synapses eligible for an update are actually modified. This makes it possible to acquire the amount of information necessary to retrieve the memory without compromising the retention of old experiences. The fraction of modified synapses can be further reduced in a smart way by changing synapses only when it is really necessary, i.e. when the post-synaptic neuron does not respond as desired. Finally, we show that such a stochastic selection emerges naturally from spike-driven synaptic dynamics which read noisy pre- and post-synaptic neural activities. These activities can actually be generated by a chaotic system.
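A minimal sketch of this kind of stochastic, error-driven consolidation, using binary synapses, a hypothetical firing threshold, and a consolidation probability q chosen purely for illustration (none of these specifics are taken from the abstract):

```python
import numpy as np

rng = np.random.default_rng(1)
n_syn = 1000                              # number of synapses, assumed
q = 0.02                                  # fraction of eligible synapses consolidated, assumed

w = rng.integers(0, 2, size=n_syn)        # binary synaptic states (0 = depressed, 1 = potentiated)

def present_pattern(w, x, desired):
    """One learning step: update only when the post-synaptic response is wrong."""
    theta = 0.5 * x.sum()                 # assumed firing threshold
    response = int(w @ x > theta)
    if response != desired:               # error-driven: no change if the response is correct
        eligible = x == 1                 # synapses driven by active pre-synaptic neurons
        selected = eligible & (rng.random(n_syn) < q)  # stochastic selection of a small fraction
        w[selected] = desired             # move only the selected synapses toward the target
    return w

x = rng.integers(0, 2, size=n_syn)        # random binary input pattern
w = present_pattern(w, x, desired=1)
```

Because most eligible synapses are left untouched on each presentation, old memories decay slowly, at the cost of needing more presentations to store a new one, which is the trade-off the abstract describes.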
Abstract:
Stochastic simulation is an important and practical technique for computing probabilities of rare events, such as the payoff probability of a financial option, the probability that a queue exceeds a certain level, or the probability of ruin of the insurer's risk process. Rare events occur so infrequently that they cannot reasonably be recorded during a standard simulation procedure: specific simulation algorithms which counteract the rarity of the event to be simulated are required. An important algorithm in this context is based on changing the sampling distribution and is called importance sampling. Optimal Monte Carlo algorithms for computing rare event probabilities are either logarithmically efficient or possess bounded relative error.
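For concreteness, a small hedged example of importance sampling (the general technique, not a specific algorithm from this text): estimating the Gaussian tail probability P(Z > a) by sampling from a proposal shifted into the rare region and reweighting with the likelihood ratio; the threshold a and sample size below are chosen only for the demonstration.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
a = 4.0                                   # threshold defining the rare event {Z > a}
n = 100_000

# Crude Monte Carlo: the event is so rare that almost no sample hits it.
z = rng.standard_normal(n)
p_crude = np.mean(z > a)

# Importance sampling: draw from the shifted proposal N(a, 1) and reweight
# each sample by the likelihood ratio phi(x) / phi(x - a).
x = rng.normal(loc=a, scale=1.0, size=n)
weights = norm.pdf(x) / norm.pdf(x, loc=a)
p_is = np.mean((x > a) * weights)

print(p_crude, p_is, norm.sf(a))          # exact tail probability is about 3.17e-5
```

Shifting the proposal mean to a is the classical exponential-tilting choice in the Gaussian case; the reweighted estimator remains unbiased and has a much smaller relative error than crude Monte Carlo for the same number of samples.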