920 results for probability distribution
Abstract:
We examine the statistics of three interacting optical solitons under the effects of amplifier noise and filtering. We derive rigorously the Fokker-Planck equation that governs the probability distribution of soliton parameters.
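As an illustrative aside, not taken from the paper: the Langevin/Fokker-Planck connection invoked here can be sketched with a toy Ornstein-Uhlenbeck model, in which a restoring force stands in for filtering and Gaussian increments stand in for amplifier noise. The stationary Fokker-Planck solution of this toy model is Gaussian with variance σ²/(2θ), which the simulation can be checked against.

```python
import math
import random

def ou_paths(theta=1.0, sigma=0.5, dt=0.01, steps=2000, n_paths=500, seed=0):
    """Euler-Maruyama integration of dX = -theta*X dt + sigma dW.

    Toy Langevin model: the restoring force plays the role of filtering,
    the Gaussian increments play the role of amplifier noise."""
    rng = random.Random(seed)
    xs = [0.0] * n_paths
    for _ in range(steps):
        xs = [x - theta * x * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
              for x in xs]
    return xs

xs = ou_paths()
# Stationary Fokker-Planck solution: Gaussian with variance sigma^2/(2*theta) = 0.125.
var = sum(x * x for x in xs) / len(xs)
print(round(var, 3))
```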
Abstract:
This work attempts to shed light on the fundamental concepts behind the stability of Multi-Agent Systems. We view the system as a discrete-time Markov chain with a potentially unknown transition probability distribution. The system is considered stable when its state has converged to an equilibrium distribution. Faced with the non-trivial task of establishing convergence to such a distribution, we propose a hypothesis-testing approach in which we test whether the convergence of a particular system metric has occurred. We describe some artificial multi-agent ecosystems that were developed, and we present results based on these systems which confirm that this approach qualitatively agrees with our intuition.
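A toy version of the procedure described above, assuming a simple two-state chain and a crude two-half-window comparison in place of a formal hypothesis test (all names, thresholds, and the metric choice are illustrative, not the paper's):

```python
import random

def simulate_chain(P, start, steps, rng):
    """Simulate a discrete-time Markov chain with transition matrix P."""
    state, path = start, []
    for _ in range(steps):
        path.append(state)
        u, acc = rng.random(), 0.0
        for nxt, prob in enumerate(P[state]):
            acc += prob
            if u < acc:
                state = nxt
                break
    return path

def looks_converged(metric, window=500, tol=0.1):
    """Crude stand-in for the hypothesis test: compare the metric's mean
    over the two halves of the most recent window."""
    recent = metric[-window:]
    half = window // 2
    m1 = sum(recent[:half]) / half
    m2 = sum(recent[half:]) / half
    return abs(m1 - m2) < tol

rng = random.Random(0)
P = [[0.9, 0.1], [0.2, 0.8]]  # equilibrium distribution is (2/3, 1/3)
path = simulate_chain(P, 0, 5000, rng)
print(sum(path) / len(path), looks_converged(path))
```

The empirical frequency of state 1 should settle near the equilibrium value 1/3, at which point the window test stops flagging drift.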
Abstract:
This work was supported by the Bulgarian National Science Fund under grant BY-TH-105/2005.
Abstract:
Motivation: Within bioinformatics, the textual alignment of amino acid sequences has long dominated the determination of similarity between proteins, with all that implies for shared structure, function, and evolutionary descent. Despite the relative success of modern-day sequence alignment algorithms, so-called alignment-free approaches offer a complementary means of determining and expressing similarity, with potential benefits in certain key applications, such as regression analysis of protein structure-function studies, where alignment-based similarity has performed poorly. Results: Here, we offer a fresh, statistical-physics-based perspective on the question of alignment-free comparison, in the process adapting results on first-passage probability distributions to summarize statistics of ensemble-averaged amino acid propensity values. In this paper, we introduce and elaborate this approach.
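As a generic illustration of alignment-free similarity (deliberately not the paper's first-passage construction), one can compare the k-mer frequency distributions of two sequences; the sequences and the L1 metric below are purely illustrative:

```python
from collections import Counter

def kmer_distribution(seq, k=2):
    """Empirical probability distribution of overlapping k-mers in a sequence."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts.values())
    return {kmer: c / total for kmer, c in counts.items()}

def l1_distance(p, q):
    """L1 distance between two sparse discrete distributions."""
    keys = set(p) | set(q)
    return sum(abs(p.get(key, 0.0) - q.get(key, 0.0)) for key in keys)

d_same = l1_distance(kmer_distribution("MKVLA"), kmer_distribution("MKVLA"))
d_diff = l1_distance(kmer_distribution("MKVLA"), kmer_distribution("GGGGG"))
print(d_same, d_diff)  # 0.0 for identical sequences; 2.0 when the k-mer sets are disjoint
```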
Abstract:
* This work was financially supported by RFBR-04-01-00858.
Abstract:
An experimental comparison of the information features used by neural networks is performed. The sensing method was used. A suboptimal classifier matched to the Gaussian model of the training data served as a probe. Neural nets with perceptron and single-hidden-layer feedforward architectures were used. The experiments were carried out with spatial ultrasonic data, which are used to train the neural controller of a car's passenger safety system. In this paper we show that a neural network does not fully exploit the Gaussian components, i.e., the first two moments of the probability distribution. On the contrary, the network can find more complicated regularities inside the data vectors and thus outperforms the suboptimal classifier. Connecting the suboptimal classifier in parallel improves the performance of a modular neural network, whereas connecting it to the network input strengthens the specialization effect during training.
Abstract:
Tests for random walk behaviour in the Italian stock market are presented, based on an investigation of the fractal properties of the log return series for the Mibtel index. The random walk hypothesis is evaluated against alternatives accommodating either unifractality or multifractality. Critical values for the test statistics are generated using Monte Carlo simulations of random Gaussian innovations. Evidence is reported of multifractality, and the departure from random walk behaviour is statistically significant on standard criteria. The observed pattern is attributed primarily to fat tails in the return probability distribution, associated with volatility clustering in returns measured over various time scales. © 2009 Elsevier Inc. All rights reserved.
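As a hedged illustration of the critical-value procedure described above (Monte Carlo simulations of Gaussian innovations), here is a toy version built on a simple non-overlapping variance-ratio statistic; the paper's actual fractal-based test statistics differ:

```python
import random
import statistics

def variance_ratio(returns, q=4):
    """Variance of non-overlapping q-period returns over q times the
    one-period variance; close to 1 under the random-walk null."""
    agg = [sum(returns[i:i + q]) for i in range(0, len(returns) - q + 1, q)]
    return statistics.variance(agg) / (q * statistics.variance(returns))

def mc_critical_values(n, q=4, reps=200, seed=1):
    """Null distribution of the statistic from Monte Carlo simulations of
    Gaussian innovations, mirroring the critical-value procedure."""
    rng = random.Random(seed)
    stats = sorted(variance_ratio([rng.gauss(0.0, 1.0) for _ in range(n)], q)
                   for _ in range(reps))
    return stats[int(0.025 * reps)], stats[int(0.975 * reps)]

lo, hi = mc_critical_values(n=1000)
print(lo < 1.0 < hi)  # the simulated null band should straddle 1
```

An observed series whose statistic falls outside (lo, hi) would then reject the random-walk null on this toy criterion.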
Abstract:
We consider an uncertain version of the scheduling problem of sequencing a set of jobs J on a single machine to minimize the weighted total flow time, where the processing time of a job may take any real value from a given closed interval. Each job's processing time is an unknown random variable before it is actually realized, and its probability distribution between the given lower and upper bounds is unknown before scheduling. We develop dominance relations on the set of jobs J. The necessary and sufficient conditions for a job to dominate another can be tested in time polynomial in the number n = |J| of jobs. If there is no domination within some subset of J, a heuristic procedure for minimizing the weighted total flow time is used to sequence the jobs from that subset. Computational experiments on randomly generated single-machine scheduling problems with n ≤ 700 show that the developed dominance relations are quite helpful in minimizing the weighted total flow time of n jobs with uncertain processing times.
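To make the dominance idea concrete, here is a hypothetical sketch: the `dominates` condition below (one job's worst-case weight-to-time ratio beating another's best-case ratio) is an illustrative sufficient condition, not the paper's exact relations, and the fallback heuristic is simply WSPT on midpoint processing times:

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    p_lo: float  # lower bound on processing time
    p_hi: float  # upper bound on processing time
    w: float     # weight

def dominates(i: Job, j: Job) -> bool:
    """Illustrative sufficient condition: i can safely precede j if i's
    worst-case weight-to-time ratio is at least j's best-case ratio."""
    return i.w / i.p_hi >= j.w / j.p_lo

def heuristic_order(jobs):
    """Fallback heuristic: WSPT on midpoint processing times."""
    return sorted(jobs, key=lambda j: (j.p_lo + j.p_hi) / (2 * j.w))

jobs = [Job("a", 2, 4, 3), Job("b", 5, 9, 1), Job("c", 1, 2, 5)]
order = [j.name for j in heuristic_order(jobs)]
print(order)  # c has the best weight-to-time ratio, b the worst
```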
Abstract:
Dedicated to Professor A.M. Mathai on the occasion of his 75th birthday. Mathematics Subject Classification 2010: 26A33, 44A10, 33C60, 35J10.
Abstract:
The article analyzes the contribution of stochastic thermal fluctuations to the attachment times of the immature T-cell receptor TCR:peptide-major-histocompatibility-complex pMHC immunological synapse bond. The key question addressed here is the following: how does a synapse bond remain stabilized in the presence of high-frequency thermal noise that potentially equates to a strong detaching force? Focusing on the average time persistence of an immature synapse, we show that the high-frequency nodes accompanying large fluctuations are counterbalanced by low-frequency nodes that evolve over longer time periods, so that signaling of the immunological synapse bond is eventually decided primarily by nodes of the latter type. Our analysis shows that such counterintuitive behavior can be explained by the fact that the survival probability distribution is governed by two distinct phases, corresponding to two separate time exponents for the two different time regimes. The relatively shorter timescales correspond to the cohesion:adhesion-induced immature bond formation, whereas the longer times reciprocate the association:dissociation regime leading to TCR:pMHC signaling. From an estimate of the bond survival probability, we show that, at shorter timescales, this probability P_Δ(τ) scales with time τ as a universal function of a rescaled noise amplitude D/Δ², such that P_Δ(τ) ∼ τ^(−(Δ/D + 1/2)), where Δ is the distance from the mean intermembrane (T cell:Antigen Presenting Cell) separation distance. The crossover from this shorter to a longer time regime leads to a universality in the dynamics, at which point the survival probability shows a different power-law scaling compared to the one at shorter timescales. In biological terms, such a crossover indicates that the TCR:pMHC bond has a survival probability with a slower decay rate than the longer LFA-1:ICAM-1 bond, justifying its stability.
Abstract:
Implementation of a Monte Carlo simulation for the solution of population balance equations (PBEs) requires choosing the initial sample number (N0), the number of replicates (M), and the number of bins for probability distribution reconstruction (n). The squared Hellinger distance, H², is found to be a useful measure of the accuracy of a Monte Carlo (MC) simulation and can be related directly to N0, M, and n. Asymptotic approximations of H² are deduced and tested for both one-dimensional (1-D) and 2-D PBEs with coalescence. The central processing unit (CPU) cost, C, follows a power law, C = a·M·N0^b, where the CPU cost index, b, indicates the weighting of N0 in the total CPU cost. The bin number n must be chosen to balance accuracy and resolution. For fixed n, the product M × N0 determines the accuracy of the MC prediction; if b > 1, the optimal strategy uses multiple replicates and a small sample size, whereas if 0 < b < 1, one replicate with a large initial sample size is preferred. © 2015 American Institute of Chemical Engineers AIChE J, 61: 2394–2402, 2015
Abstract:
We propose a family of attributed graph kernels based on mutual information measures, i.e., the Jensen-Tsallis (JT) q-differences (for q ∈ [1,2]) between probability distributions over the graphs. To this end, we first assign a probability to each vertex of the graph through a continuous-time quantum walk (CTQW). We then adopt the tree-index approach [1] to strengthen the original vertex labels, and we show how the CTQW can induce a probability distribution over these strengthened labels. We show that our JT kernel (for q = 1) overcomes the shortcoming of discarding non-isomorphic substructures arising in the R-convolution kernels. Moreover, we prove that the proposed JT kernels generalize the Jensen-Shannon graph kernel [2] (for q = 1) and the classical subtree kernel [3] (for q = 2), respectively. Experimental evaluations demonstrate the effectiveness and efficiency of the JT kernels.
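A minimal sketch of the Jensen-Tsallis q-difference between two discrete distributions (shown for q = 2): the Tsallis entropy of the equal mixture minus the mean Tsallis entropy of the parts. The CTQW-induced vertex distributions from the abstract are omitted here; the input vectors are purely illustrative.

```python
def tsallis_entropy(p, q):
    """Tsallis entropy S_q(P) = (1 - sum_i p_i**q) / (q - 1) for q > 1."""
    return (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)

def jt_q_difference(p1, p2, q=2.0):
    """Jensen-Tsallis q-difference: entropy of the equal mixture minus the
    mean entropy of the parts; zero when the two distributions coincide."""
    mix = [(a + b) / 2.0 for a, b in zip(p1, p2)]
    return tsallis_entropy(mix, q) - 0.5 * (tsallis_entropy(p1, q)
                                            + tsallis_entropy(p2, q))

uniform = [0.5, 0.5]
point = [1.0, 0.0]
print(jt_q_difference(uniform, uniform), jt_q_difference(uniform, point))
```

For q → 1 this quantity reduces to the Jensen-Shannon divergence, matching the abstract's claim that the q = 1 kernel generalizes the Jensen-Shannon graph kernel.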
Abstract:
2000 Mathematics Subject Classification: 60J80.