996 results for "Summed probability functions"
Abstract:
The problem of learning correct decision rules to minimize the probability of misclassification is a long-standing problem of supervised learning in pattern recognition. Learning such optimal discriminant functions is considered here for the class of problems in which the statistical properties of the pattern classes are completely unknown. The problem is posed as a game with common payoff played by a team of mutually cooperating learning automata, which results in a probabilistic search through the space of classifiers. The approach is inherently capable of learning discriminant functions that are nonlinear in their parameters as well. A learning algorithm is presented for the team and its convergence is established. It is proved that the team can obtain the optimal classifier to an arbitrary degree of approximation. Simulation results are presented for a few examples in which the team learns the optimal classifier.
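As a rough illustration of the probabilistic-search idea (not the paper's multi-automaton team algorithm), the sketch below uses a single linear-reward-inaction (L_{R-I}) automaton choosing among a handful of hypothetical 1-D threshold classifiers; the class distributions, candidate thresholds, and learning rate are all invented for the example.

```python
import random

random.seed(0)

# Candidate decision rules: 1-D thresholds (class 1 declared above threshold).
thresholds = [0.0, 0.5, 1.0, 1.5, 2.0]
p = [1.0 / len(thresholds)] * len(thresholds)   # action probabilities
lam = 0.01                                      # L_{R-I} learning rate

def sample():
    """Two overlapping Gaussian classes; the best threshold is near 1.0."""
    label = random.randint(0, 1)
    x = random.gauss(2.0 * label, 0.7)
    return x, label

for _ in range(20000):
    # Pick a classifier according to the current probability vector.
    i = random.choices(range(len(thresholds)), weights=p)[0]
    x, label = sample()
    if (x > thresholds[i]) == (label == 1):   # reward on a correct call
        p = [(1.0 - lam) * q for q in p]      # reward-inaction update
        p[i] += lam                           # (no change on penalty)

best = max(range(len(thresholds)), key=lambda i: p[i])
```

With enough samples the probability mass concentrates on the classifier with the highest probability of correct classification, which is the sense in which the team search converges.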
Abstract:
Hydrologic impacts of climate change are usually assessed by downscaling the General Circulation Model (GCM) output of large-scale climate variables to local-scale hydrologic variables. Such an assessment is characterized by uncertainty resulting from the ensembles of projections generated with multiple GCMs, which is known as intermodel or GCM uncertainty. Ensemble averaging with the assignment of weights to GCMs based on model evaluation is one of the methods to address such uncertainty and is used in the present study for regional-scale impact assessment. GCM outputs of large-scale climate variables are downscaled to subdivisional-scale monsoon rainfall. Weights are assigned to the GCMs on the basis of model performance and model convergence, which are evaluated with the Cumulative Distribution Functions (CDFs) generated from the downscaled GCM output (for both 20th Century [20C3M] and future scenarios) and observed data. The ensemble averaging approach, with the assignment of weights to GCMs, is characterized by the uncertainty caused by partial ignorance, which stems from the nonavailability of the outputs of some of the GCMs for a few scenarios (in the Intergovernmental Panel on Climate Change [IPCC] data distribution center for Assessment Report 4 [AR4]). This uncertainty is modeled with imprecise probability, i.e., the probability being represented as an interval gray number. Furthermore, the CDF generated with one GCM can be entirely different from that generated with another, and therefore the use of multiple GCMs results in a band of CDFs. Representing this band of CDFs with a single-valued weighted-mean CDF may be misleading. Such a band of CDFs can only be represented with an envelope that contains all the CDFs generated with a number of GCMs. An imprecise CDF represents such an envelope, which not only contains the CDFs generated with all the available GCMs but also accounts, to an extent, for the uncertainty resulting from the missing GCM output.
This concept of imprecise probability is also validated in the present study. The imprecise CDFs of monsoon rainfall are derived for three 30-year time slices, 2020s, 2050s and 2080s, with A1B, A2 and B1 scenarios. The model is demonstrated with the prediction of monsoon rainfall in Orissa meteorological subdivision, which shows a possible decreasing trend in the future.
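The envelope idea can be sketched in a few lines: given several CDFs on a common grid, the imprecise CDF is the pointwise band that contains them all. The four exponential "GCM" CDFs below are stand-ins, not anything derived from the study's data.

```python
import numpy as np

# Rainfall grid and four stand-in "GCM" CDFs (illustrative exponentials).
rain = np.linspace(0.0, 100.0, 101)
cdfs = np.array([1.0 - np.exp(-rain / scale)
                 for scale in (20.0, 25.0, 30.0, 35.0)])

# Imprecise CDF: the envelope [lower, upper] that contains every model's
# CDF, instead of a single weighted-mean CDF.
lower = cdfs.min(axis=0)
upper = cdfs.max(axis=0)
```

Any probability statement read off the band is then an interval rather than a single number, which is the imprecise-probability representation the abstract describes.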
Abstract:
We consider a scenario in which a wireless sensor network is formed by randomly deploying n sensors to measure some spatial function over a field, with the objective of computing a function of the measurements and communicating it to an operator station. We restrict ourselves to the class of type-threshold functions (as defined in the work of Giridhar and Kumar, 2005), of which max, min, and indicator functions are important examples; our discussion is couched in terms of the max function. We view the problem as one of message-passing distributed computation over a geometric random graph. The network is assumed to be synchronous, and the sensors synchronously measure values and then collaborate to compute and deliver the function computed with these values to the operator station. Computation algorithms differ in (1) the communication topology assumed and (2) the messages that the nodes need to exchange in order to carry out the computation. The focus of our paper is to establish (in probability) scaling laws for the time and energy complexity of the distributed function computation over random wireless networks, under the assumption of centralized contention-free scheduling of packet transmissions. First, without any constraint on the computation algorithm, we establish scaling laws for the computation time and energy expenditure for one-time maximum computation. We show that for an optimal algorithm, the computation time and energy expenditure scale, respectively, as Θ(√(n/log n)) and Θ(n) asymptotically as the number of sensors n → ∞. Second, we analyze the performance of three specific computation algorithms that may be used in specific practical situations, namely, the tree algorithm, multihop transmission, and the Ripple algorithm (a type of gossip algorithm), and obtain scaling laws for the computation time and energy expenditure as n → ∞.
In particular, we show that the computation time for these algorithms scales as Θ(√(n/log n)), Θ(n), and Θ(√(n log n)), respectively, whereas the energy expended scales as Θ(n), Θ(√(n/log n)), and Θ(√(n log n)), respectively. Finally, simulation results are provided to show that our analysis indeed captures the correct scaling. The simulations also yield estimates of the constant multipliers in the scaling laws. Our analyses throughout assume a centralized optimal scheduler, and hence, our results can be viewed as providing bounds for the performance with practical distributed schedulers.
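The tree algorithm can be caricatured as pairwise max-merging up a binary aggregation tree, which makes the logarithmic round count concrete; the sketch below ignores the geometric random graph, contention, and energy accounting entirely, so it is only an abstraction of the communication pattern.

```python
import math
import random

random.seed(2)

def tree_max_rounds(values):
    """One-shot max computation up a binary aggregation tree.

    In each round, surviving nodes are paired, each pair exchanges one
    packet, and the larger value survives, so the number of rounds is
    ceil(log2(n)).
    """
    level = list(values)
    rounds = 0
    while len(level) > 1:
        # Pairwise merge; an odd node out simply carries its value forward.
        level = [max(level[i:i + 2]) for i in range(0, len(level), 2)]
        rounds += 1
    return level[0], rounds

values = [random.random() for _ in range(1000)]
result, rounds = tree_max_rounds(values)
```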
Abstract:
We report numerical and analytic results for the spatial survival probability for fluctuating one-dimensional interfaces with Edwards-Wilkinson or Kardar-Parisi-Zhang dynamics in the steady state. Our numerical results are obtained from analysis of steady-state profiles generated by integrating a spatially discretized form of the Edwards-Wilkinson equation to long times. We show that the survival probability exhibits scaling behavior in its dependence on the system size and the "sampling interval" used in the measurement for both "steady-state" and "finite" initial conditions. Analytic results for the scaling functions are obtained from a path-integral treatment of a formulation of the problem in terms of one-dimensional Brownian motion. A "deterministic approximation" is used to obtain closed-form expressions for survival probabilities from the formally exact analytic treatment. The resulting approximate analytic results provide a fairly good description of the numerical data.
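A minimal version of the numerical part of such a study, with invented parameters and a deliberately short run, is to integrate the spatially discretized Edwards-Wilkinson equation and then measure a spatial survival probability as the fraction of windows over which the profile keeps its sign relative to the mean:

```python
import math
import random

random.seed(3)

L = 128       # system size (illustrative)
dt = 0.05     # Euler time step (stable: dt < 0.5 for unit lattice spacing)
h = [0.0] * L

# Integrate the discretized Edwards-Wilkinson equation,
#   dh_i = (h_{i+1} - 2 h_i + h_{i-1}) dt + sqrt(2 dt) * eta_i,
# with periodic boundaries, for a short illustrative run.
for _ in range(5000):
    eta = [random.gauss(0.0, 1.0) for _ in range(L)]
    h = [h[i] + dt * (h[(i + 1) % L] - 2.0 * h[i] + h[(i - 1) % L])
         + math.sqrt(2.0 * dt) * eta[i] for i in range(L)]

mean = sum(h) / L

def spatial_survival(l):
    """Fraction of length-l windows in which h - <h> does not change sign."""
    count = 0
    for start in range(L):
        w = [h[(start + k) % L] - mean for k in range(l)]
        if all(x > 0 for x in w) or all(x < 0 for x in w):
            count += 1
    return count / L

S = [spatial_survival(l) for l in (2, 8, 32)]
```

Because a surviving long window contains surviving short windows, the estimate is monotone decreasing in the window length by construction; the scaling analysis in the paper concerns how this decay depends on system size and sampling interval.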
Abstract:
The probability distribution of the eigenvalues of a second-order stochastic boundary value problem is considered. The solution is characterized in terms of the zeros of an associated initial value problem. It is further shown that the probability distribution is related to the solution of a first-order nonlinear stochastic differential equation. Solutions of this equation based on the theory of Markov processes and also on the closure approximation are presented. A string with stochastic mass distribution is considered as an example for numerical work. The theoretical probability distribution functions are compared with digital simulation results. The comparison is found to be reasonably good.
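As a toy stand-in for this setup (using a direct finite-difference eigensolver rather than the paper's initial-value-problem characterization), one can estimate the probability distribution of the fundamental eigenvalue of a string with random mass density by Monte Carlo; the mass-density range and discretization are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

def fundamental_eigenvalue(n=50):
    """Smallest eigenvalue of a fixed-fixed string with random mass density.

    Finite-difference form of -u'' = lam * m(x) * u on (0, 1): the
    generalized problem K u = lam M u is symmetrized to
    M^(-1/2) K M^(-1/2) w = lam w before calling the dense eigensolver.
    """
    dx = 1.0 / (n + 1)
    K = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / dx**2
    m = rng.uniform(0.8, 1.2, size=n)        # random mass at each node
    s = np.diag(1.0 / np.sqrt(m))
    return np.linalg.eigvalsh(s @ K @ s)[0]

samples = np.array([fundamental_eigenvalue() for _ in range(300)])
grid = np.linspace(samples.min(), samples.max(), 50)
cdf = np.array([(samples <= x).mean() for x in grid])  # empirical distribution
```

For a uniform unit density the fundamental eigenvalue is pi^2, so the empirical distribution should be centered near that value for the mild density fluctuations chosen here.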
Abstract:
Consider L independent and identically distributed exponential random variables (r.v.s) X_1, X_2, ..., X_L and positive scalars b_1, b_2, ..., b_L. In this letter, we present the probability density function (pdf), the cumulative distribution function, and the Laplace transform of the pdf of the composite r.v. Z = (Σ_{j=1}^{L} X_j)² / (Σ_{j=1}^{L} b_j X_j). We show that the r.v. Z appears in various communication systems such as i) maximal ratio combining of signals received over multiple channels with mismatched noise variances, ii) M-ary phase-shift keying with spatial diversity and imperfect channel estimation, and iii) coded multi-carrier code-division multiple access reception affected by an unknown narrow-band interference, and the statistics of the r.v. Z derived here enable us to carry out the performance analysis of such systems in closed form.
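Lacking the closed-form expressions, the distribution of Z is easy to approximate by Monte Carlo, which is also a convenient sanity check against any closed-form pdf; the b_j values below are arbitrary.

```python
import random

random.seed(5)

L = 4
b = [0.5, 1.0, 1.5, 2.0]   # arbitrary positive scalars

def sample_Z():
    """One draw of Z = (sum X_j)^2 / (sum b_j X_j) with X_j ~ Exp(1)."""
    x = [random.expovariate(1.0) for _ in range(L)]
    s = sum(x)
    return s * s / sum(bj * xj for bj, xj in zip(b, x))

zs = [sample_Z() for _ in range(10000)]

def ecdf(t):
    """Empirical CDF of Z, a Monte Carlo stand-in for the closed form."""
    return sum(z <= t for z in zs) / len(zs)
```

Since min(b) * sum(X) <= sum(b_j X_j) <= max(b) * sum(X), every sample of Z lies between sum(X)/max(b) and sum(X)/min(b), so Z is always strictly positive.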
Abstract:
Differential evolution (DE) is arguably one of the most powerful stochastic real-parameter optimization algorithms of current interest. Since its inception in the mid 1990s, DE has been finding many successful applications in real-world optimization problems from diverse domains of science and engineering. This paper takes a first significant step toward the convergence analysis of a canonical DE (DE/rand/1/bin) algorithm. It first deduces a time-recursive relationship for the probability density function (PDF) of the trial solutions, taking into consideration the DE-type mutation, crossover, and selection mechanisms. Then, by applying the concepts of Lyapunov stability theorems, it shows that as time approaches infinity, the PDF of the trial solutions concentrates narrowly around the global optimum of the objective function, assuming the shape of a Dirac delta distribution. Asymptotic convergence behavior of the population PDF is established by constructing a Lyapunov functional based on the PDF and showing that it monotonically decreases with time. The analysis is applicable to a class of continuous and real-valued objective functions that possesses a unique global optimum (but may have multiple local optima). Theoretical results have been substantiated with relevant computer simulations.
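For reference, a bare-bones DE/rand/1/bin loop (the variant analyzed) is only a few lines; the population size, F, Cr, and generation count below are conventional but arbitrary choices, demonstrated on the sphere function, which has the unique global optimum the analysis assumes.

```python
import random

random.seed(6)

def de_rand_1_bin(f, dim=5, pop_size=30, F=0.8, Cr=0.9, gens=300):
    """Canonical DE/rand/1/bin on the box [-5, 5]^dim (illustrative settings)."""
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            # DE/rand/1 mutation: base vector plus scaled difference vector.
            r1, r2, r3 = random.sample([j for j in range(pop_size) if j != i], 3)
            jrand = random.randrange(dim)   # guarantees one mutated coordinate
            # Binomial crossover builds the trial vector.
            trial = [pop[r1][j] + F * (pop[r2][j] - pop[r3][j])
                     if (random.random() < Cr or j == jrand) else pop[i][j]
                     for j in range(dim)]
            c = f(trial)
            if c <= cost[i]:                # greedy one-to-one selection
                pop[i], cost[i] = trial, c
    return min(cost)

best = de_rand_1_bin(lambda x: sum(v * v for v in x))  # sphere function
```

On this unimodal test function the population collapses toward the optimum, which is the behavior the Lyapunov argument formalizes (the population PDF tending to a Dirac delta at the global optimum).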
Abstract:
Given a Boolean function f: F_2^n → {0, 1}, we say a triple (x, y, x + y) is a triangle in f if f(x) = f(y) = f(x + y) = 1. A triangle-free function contains no triangle. If f differs from every triangle-free function on at least an ε-fraction of points, then f is said to be ε-far from triangle-free. In this work, we analyze the query complexity of testers that, with constant probability, distinguish triangle-free functions from those ε-far from triangle-free. The canonical tester for triangle-freeness is the algorithm that repeatedly picks x and y uniformly and independently at random from F_2^n, queries f(x), f(y) and f(x + y), and checks whether f(x) = f(y) = f(x + y) = 1. Green showed that the canonical tester rejects functions ε-far from triangle-free with constant probability if its query complexity is a tower of 2's whose height is polynomial in 1/ε. Fox later improved the height of the tower in Green's upper bound to a logarithmic dependence on 1/ε. A trivial lower bound of Ω(1/ε) on the query complexity is immediate. In this paper, we give the first non-trivial lower bound on the number of queries needed. We show that, for every small enough ε, there exists an integer n(ε) such that for all sufficiently large n there exists a function depending on all n variables which is ε-far from being triangle-free and requires a super-linear (in 1/ε) number of queries for the canonical tester. We also show that the query complexity of any general (possibly adaptive) one-sided tester for triangle-freeness is at least the square root of the query complexity of the corresponding canonical tester, so the lower bound carries over, up to a square root, to any one-sided tester.
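The canonical tester itself is straightforward to code, with addition in F_2^n realized as bitwise XOR; the sketch below checks it on two extreme examples (the all-zeros function, which is triangle-free, and the all-ones function, for which every triple is a triangle):

```python
import random

random.seed(7)

def canonical_tester(f, n, queries):
    """Reject if some sampled triple (x, y, x XOR y) is a triangle in f."""
    for _ in range(queries // 3):   # each trial costs three queries
        x = random.randrange(1 << n)
        y = random.randrange(1 << n)
        if f(x) and f(y) and f(x ^ y):
            return "reject"
    return "accept"

n = 10
all_zero = lambda x: 0   # triangle-free: no triple can satisfy the check
all_one = lambda x: 1    # every triple (x, y, x ^ y) is a triangle
res_free = canonical_tester(all_zero, n, 300)
res_far = canonical_tester(all_one, n, 300)
```

The tester is one-sided: it only rejects when it has exhibited an actual triangle, so triangle-free functions are always accepted.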
Abstract:
A hierarchical model is proposed for the joint moments of the passive scalar dissipation and the velocity dissipation in fluid turbulence. This model predicts that the joint probability density function (PDF) of the dissipations is a bivariate log-Poisson. An analytical calculation of the scaling exponents of structure functions of the passive scalar is carried out for this hierarchical model, showing a good agreement with the results of direct numerical simulations and experiments.
Abstract:
A location- and scale-invariant predictor is constructed which exhibits good probability matching for extreme predictions outside the span of data drawn from a variety of (stationary) general distributions. It is constructed via the three-parameter (μ, σ, ξ) Generalized Pareto Distribution (GPD). The predictor is designed to provide exact probability matching for the GPD in both the extreme heavy-tailed limit and the extreme bounded-tail limit, whilst giving a good approximation to probability matching at all intermediate values of the tail parameter ξ. The predictor is valid even for small sample sizes N, as small as N = 3. The main purpose of this paper is to present the somewhat lengthy derivations, which draw heavily on the theory of hypergeometric functions, particularly the Lauricella functions. Whilst the construction is inspired by the Bayesian approach to the prediction problem, it considers the case of vague prior information about both parameters and model, and all derivations are undertaken using sampling theory.
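While the paper's sample-based invariant predictor is beyond a short sketch, the probability-matching property itself is easy to state in code for the known-parameter GPD: a level aimed at exceedance probability ε is exceeded a fraction ≈ ε of the time. The parameter values below are illustrative.

```python
import random

random.seed(8)

def gpd_quantile(p, mu, sigma, xi):
    """Quantile function of the Generalized Pareto Distribution (xi != 0)."""
    return mu + sigma * ((1.0 - p) ** (-xi) - 1.0) / xi

def gpd_cdf(z, mu, sigma, xi):
    """CDF of the GPD for the xi != 0 branch."""
    return 1.0 - (1.0 + xi * (z - mu) / sigma) ** (-1.0 / xi)

def gpd_sample(mu, sigma, xi):
    """Inverse-transform sampling from the GPD."""
    return gpd_quantile(random.random(), mu, sigma, xi)

# Probability matching for the known-parameter case: the level aimed at
# exceedance probability eps is exceeded a fraction ~eps of the time.
mu, sigma, xi, eps = 0.0, 1.0, 0.3, 0.05
level = gpd_quantile(1.0 - eps, mu, sigma, xi)
hits = sum(gpd_sample(mu, sigma, xi) > level for _ in range(20000)) / 20000
```

The hard part the paper addresses is preserving this matching when μ, σ, ξ must be inferred from a small sample, which is where the hypergeometric machinery enters.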
Abstract:
The excitation functions of two very similar reaction channels, Fe-58 + Pb-208 → Hs-265 + 1n and Fe-58 + Bi-209 → Mt-266 + 1n, are studied in the framework of the dinuclear system concept. The fusion probabilities are found to depend strongly on the structure of the driving potential. The fusion probability is usually hindered by a barrier between the injection channel and the compound-nucleus configuration. The barrier in the mass-symmetric direction, however, also plays an important role for the fusion probability, because it hinders quasi-fission and therefore helps fusion.
Abstract:
By revealing close links among strong ergodicity, monotonicity, and the Feller–Reuter–Riley (FRR) property of transition functions, we prove that a monotone ergodic transition function is strongly ergodic if and only if it is not FRR. An easy-to-check criterion for a Feller minimal monotone chain to be strongly ergodic is then obtained. We further prove that a non-minimal ergodic monotone chain is always strongly ergodic. The applications of our results are illustrated using birth-and-death processes and branching processes.
Abstract:
The greatest relaxation time for an assembly of three-dimensional rigid rotators in an axially symmetric bistable potential is obtained exactly in terms of continued fractions as a sum of the zero-frequency decay functions (averages of the Legendre polynomials) of the system. This is accomplished by studying the entire time evolution of the Green function (transition probability), expanding the time-dependent distribution as a Fourier series, and proceeding to the zero-frequency limit of the Laplace transform of that distribution. The procedure is entirely analogous to the calculation of the characteristic time of the probability evolution (the integral of the configuration-space probability density function with respect to the position coordinate) for a particle undergoing translational diffusion in a potential, a concept originally used by Malakhov and Pankratov (Physica A 229 (1996) 109). This concept allowed them to obtain exact solutions of Kramers' one-dimensional translational escape rate problem for piecewise parabolic potentials, by posing the problem in terms of the appropriate Sturm-Liouville equation, which could be solved in terms of the parabolic cylinder functions. The method (as applied to rotational problems and posed in terms of recurrence relations for the decay functions, i.e., the Brinkman approach, cf. Blomberg, Physica A 86 (1977) 49, as opposed to the Sturm-Liouville one) demonstrates clearly that the greatest relaxation time, unlike the integral relaxation time, which is governed by a single decay function (albeit coupled to all the others in nonlinear fashion via the underlying recurrence relation), is governed by a sum of decay functions. The method is easily generalized to multidimensional state spaces by matrix continued fraction methods, allowing one to treat non-axially symmetric potentials, where the distribution function is governed by two state variables. (C) 2001 Elsevier Science B.V. All rights reserved.
Abstract:
Incoherent Thomson scattering (ITS) provides a nonintrusive diagnostic for the determination of the one-dimensional (1D) electron velocity distribution in plasmas. When the ITS spectrum is Gaussian, its interpretation as a three-dimensional (3D) Maxwellian velocity distribution is straightforward. For more complex ITS line shapes, derivation of the corresponding 3D velocity distribution and electron energy probability distribution function is more difficult. This article reviews current techniques and proposes an approach to making the transformation between a 1D velocity distribution and the corresponding 3D energy distribution. Previous approaches have either transformed the ITS spectra directly from a 1D distribution to a 3D one, or fitted two Gaussians assuming a Maxwellian or bi-Maxwellian distribution. Here, the measured ITS spectrum is transformed into a 1D velocity distribution, and the probability of finding a particle with speed between 0 and a given value v is calculated. The derivative of this probability function is shown to be the normalized electron velocity distribution function. (C) 2003 American Institute of Physics.
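The isotropic 1D-to-3D transformation can be checked numerically for the Gaussian case, where the answer is known: starting from a 1D Maxwellian f1 (a stand-in for a measured ITS spectrum), the relations f3(v) = -f1'(v)/(2πv) and g(v) = 4πv²f3(v) should reproduce the Maxwell speed distribution. The symbols and step size here are illustrative, not the article's notation.

```python
import math

# For an isotropic 3-D distribution f3(|v|), the measured 1-D distribution is
#   f1(vx) = 2*pi * integral_{|vx|}^{inf} f3(u) * u du,
# so f3(v) = -f1'(v) / (2*pi*v), and the speed pdf is g(v) = 4*pi*v^2*f3(v).
sigma = 1.0

def f1(v):
    """1-D Maxwellian, standing in for a measured ITS velocity distribution."""
    return math.exp(-v * v / (2 * sigma**2)) / (math.sqrt(2 * math.pi) * sigma)

dv = 1e-4

def speed_pdf(v):
    """Speed pdf recovered numerically from the 1-D distribution."""
    d = (f1(v + dv) - f1(v - dv)) / (2 * dv)   # central difference for f1'
    f3 = -d / (2 * math.pi * v)
    return 4 * math.pi * v * v * f3

def maxwell_speed_pdf(v):
    """Analytic Maxwell speed distribution, for comparison."""
    return math.sqrt(2 / math.pi) * v * v * math.exp(-v * v / 2) / sigma**3

err = max(abs(speed_pdf(v) - maxwell_speed_pdf(v))
          for v in (0.2, 0.5, 1.0, 1.5, 2.0, 3.0))
```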