957 results for Probability distributions
Abstract:
The growth and dissolution dynamics of nonequilibrium crystal size distributions (CSDs) can be determined by solving the governing population balance equations (PBEs) representing reversible addition or dissociation. New PBEs are considered that intrinsically incorporate growth dispersion and yield complete CSDs. We present two approaches to solving the PBEs: a moment method and a numerical scheme. The results of the numerical scheme agree with the moment technique, which can be solved exactly when the powers on the mass-dependent growth and dissolution rate coefficients are either zero or one. The numerical scheme is more general and can be applied when the powers of the rate coefficients are non-integer or greater than unity. The influence of the size-dependent rates on the time variation of the CSDs indicates that, as equilibrium is approached, the CSDs become narrow when the exponent on the growth rate is less than the exponent on the dissolution rate. If the exponent on the growth rate is greater than the exponent on the dissolution rate, the polydispersity continues to broaden. The computational method applies to crystals large enough that interfacial stability issues, such as ripening, can be neglected. (C) 2002 Elsevier Science B.V. All rights reserved.
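A hedged sketch of the moment bookkeeping implied above (generic power-law forms in our notation; the paper's exact PBE is not reproduced here): with size-dependent growth and dissolution rate coefficients

\[
k_g(x) = \gamma\, x^{\lambda}, \qquad k_d(x) = \kappa\, x^{\nu},
\]

the CSD c(x, t) can be tracked through its moments

\[
\mu_n(t) = \int_0^{\infty} x^{n}\, c(x,t)\, dx, \qquad n = 0, 1, 2, \ldots,
\]

and the resulting moment equations close only when the exponents λ and ν are 0 or 1, which is why the exact moment solution is restricted to those cases while non-integer or larger exponents require the numerical scheme. The narrowing or broadening of the CSD can then be monitored through the polydispersity index μ₀μ₂/μ₁².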
Abstract:
The distribution of fluorescence resonance energy transfer (FRET) efficiency between the two ends of a Lennard-Jones polymer chain, both at equilibrium and during folding and unfolding, has been calculated for the first time by Brownian dynamics simulations. The distribution of FRET efficiency becomes bimodal during folding of the extended state subsequent to a temperature quench, with the width of the distribution for the extended state broader than that for the folded state. The reverse process of unfolding subsequent to an upward temperature jump shows different characteristics. The distributions show a significant viscosity dependence which can be tested against experiments.
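For context, the standard relation between FRET efficiency and donor-acceptor separation R (not specific to this paper) is

\[
E(R) \;=\; \frac{1}{1 + (R/R_0)^{6}},
\]

where R_0 is the Förster radius. The distribution of E over the simulated ensemble therefore mirrors the distribution of end-to-end distances of the chain: compact (folded) configurations accumulate near E ≈ 1, while extended configurations spread over lower efficiencies, consistent with the bimodality and the broader extended-state peak described above.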
Abstract:
The statistically steady humidity distribution resulting from the interaction of advection, modelled as an uncorrelated random walk of moist parcels on an isentropic surface, and a vapour sink, modelled as immediate condensation whenever the specific humidity exceeds a specified saturation humidity, is explored with theory and simulation. A source supplies moisture at the deep-tropical southern boundary of the domain, and the saturation humidity is specified as a monotonically decreasing function of distance from the boundary. The boundary source balances the interior condensation sink, so that a stationary, spatially inhomogeneous humidity distribution emerges. An exact solution of the Fokker-Planck equation delivers a simple expression for the resulting probability density function (PDF) of the water-vapour field and also of the relative humidity. This solution agrees completely with a numerical simulation of the process, and the humidity PDF exhibits several features of interest, such as bimodality close to the source and unimodality further from the source. The PDFs of specific and relative humidity are broad and non-Gaussian. The domain-averaged relative humidity PDF is bimodal with distinct moist and dry peaks, a feature which we show agrees with middleworld isentropic PDFs derived from the ERA-Interim dataset. Copyright (C) 2011 Royal Meteorological Society
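A minimal sketch (our own illustration, not the authors' code) of the advection-condensation process described above: parcels random-walk on a line, are resaturated at the moist boundary, and are condensed back to a prescribed saturation profile whenever they become supersaturated. The domain, step size and saturation profile below are assumptions made purely for illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    n_parcels = 50_000
    n_steps = 5_000
    dy = 0.01                          # random-walk step; domain is y in [0, 1]

    def qs(y):
        # prescribed saturation specific humidity, monotonically
        # decreasing away from the moist boundary at y = 0
        return np.exp(-3.0 * y)

    y = rng.random(n_parcels)          # parcel positions
    q = qs(y)                          # start saturated

    for _ in range(n_steps):
        y += dy * rng.choice([-1.0, 1.0], size=n_parcels)

        # moist source: parcels hitting the y = 0 boundary are resaturated
        hit = y < 0.0
        y[hit] = -y[hit]
        q[hit] = qs(y[hit])

        # the far (dry) boundary at y = 1 is simply reflecting
        far = y > 1.0
        y[far] = 2.0 - y[far]

        # condensation sink: remove any supersaturation immediately
        q = np.minimum(q, qs(y))

    rh = q / qs(y)                     # relative humidity of each parcel
    pdf, edges = np.histogram(rh, bins=50, range=(0.0, 1.0), density=True)
    print("domain-averaged RH PDF (first few bins):", pdf[:5])

With enough parcels and steps, the histogram of rh develops the broad, non-Gaussian shape with moist and dry peaks described in the abstract.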
Abstract:
This paper presents a novel Second Order Cone Programming (SOCP) formulation for large-scale binary classification tasks. Assuming that the class-conditional densities are mixture distributions in which each component has a spherical covariance, the second-order statistics of the components can be estimated efficiently using clustering algorithms such as BIRCH. For each cluster, the second-order moments are used to derive a second-order cone constraint via a Chebyshev-Cantelli inequality. This constraint ensures that any data point in the cluster is classified correctly with high probability. The result is a large-margin SOCP formulation whose size depends on the number of clusters rather than the number of training data points. Hence, the proposed formulation scales well to large datasets compared with state-of-the-art classifiers such as Support Vector Machines (SVMs). Experiments on real-world and synthetic datasets show that the proposed algorithm outperforms SVM solvers in training time while achieving similar accuracies.
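A hedged sketch of the kind of derivation referred to above (our notation, not necessarily the paper's): if a cluster with label y ∈ {+1, -1} has mean μ and covariance Σ, the one-sided Chebyshev-Cantelli inequality

\[
\Pr\!\left(Z \le \mathbb{E}[Z] - t\right) \;\le\; \frac{\operatorname{Var}(Z)}{\operatorname{Var}(Z) + t^{2}}, \qquad t > 0,
\]

applied to the margin Z = y(wᵀX + b) shows that the second-order cone constraint

\[
y\,(w^{\top}\mu + b) \;\ge\; 1 + \kappa\,\lVert \Sigma^{1/2} w \rVert_{2},
\qquad \kappa = \sqrt{\frac{\eta}{1-\eta}},
\]

guarantees that a point drawn from that cluster is classified correctly with probability at least η. One such constraint per cluster yields a large-margin SOCP whose size is governed by the number of clusters rather than the number of training points.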
Abstract:
Many downscaling techniques have been developed in the past few years for projecting station-scale hydrological variables from the large-scale atmospheric variables simulated by general circulation models (GCMs), in order to assess the hydrological impacts of climate change. This article compares the performance of three downscaling methods, viz. conditional random field (CRF), K-nearest neighbour (KNN) and support vector machine (SVM), in downscaling precipitation in the Punjab region of India, which belongs to the monsoon regime. The CRF model is a recently developed method for downscaling hydrological variables in a probabilistic framework, while the SVM model is a popular machine learning tool valued for its ability to generalize and to capture nonlinear relationships between predictors and predictand. The KNN model is an analogue-type method that queries days similar to a given feature vector from the training data and generates future days by random sampling from a weighted set of the K closest training examples. The models are applied to downscaling monsoon (June to September) daily precipitation at six locations in Punjab. Model performance with respect to the reproduction of various statistics, such as dry and wet spell length distributions, the daily rainfall distribution, and intersite correlations, is examined. It is found that the CRF and KNN models perform slightly better than the SVM model in reproducing most daily rainfall statistics. These models are then used to project future precipitation at the six locations. Output from the Canadian Global Climate Model (CGCM3) for three scenarios, viz. A1B, A2, and B1, is used for the projection of future precipitation. The projections show a change in the probability density functions of daily rainfall amount and changes in the wet and dry spell distributions of daily precipitation. Copyright (C) 2011 John Wiley & Sons, Ltd.
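A minimal sketch of one KNN analogue downscaling step as described above (illustrative only; the predictor set, weighting and kernel are assumptions, not the paper's exact configuration):

    import numpy as np

    def knn_downscale(x_future, X_train, y_train, k=10, rng=None):
        """Generate station-scale precipitation for one future day by
        resampling from the k most similar training days.

        x_future : (d,) standardized large-scale predictors for the future day
        X_train  : (n, d) standardized predictors for training days
        y_train  : (n,) observed station precipitation on training days
        """
        if rng is None:
            rng = np.random.default_rng()
        dist = np.linalg.norm(X_train - x_future, axis=1)   # similarity in predictor space
        nearest = np.argsort(dist)[:k]                      # indices of k closest analogue days
        w = 1.0 / np.arange(1, k + 1)                       # rank-based weights, a common choice
        w /= w.sum()
        return y_train[rng.choice(nearest, p=w)]            # sampled analogue's rainfall

    # usage with synthetic data, purely for illustration
    rng = np.random.default_rng(42)
    X_train = rng.normal(size=(1000, 5))
    y_train = rng.gamma(shape=0.4, scale=8.0, size=1000)    # skewed, rainfall-like amounts
    x_future = rng.normal(size=5)
    print(knn_downscale(x_future, X_train, y_train, k=10, rng=rng))

Repeating the sampling step day by day produces synthetic precipitation sequences whose statistics (wet/dry spells, daily distribution) can then be compared against observations, as the abstract describes.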
Abstract:
Evaluation of the probability of error in decision feedback equalizers is difficult due to the presence of a hard limiter in the feedback path. This paper derives upper and lower bounds on the probability of a single error and of multiple error patterns. The bounds are fairly tight and can also be used to select proper tap gains for the equalizer.
Abstract:
Upper bounds on the probability of error due to co-channel interference are proposed in this correspondence. The bounds are easy to compute and can be fairly tight.
Abstract:
The paper outlines a technique for sensitive measurement of conduction phenomena in liquid dielectrics. The special features of this technique are the simplicity of the electrical system, the inexpensive instrumentation and the high accuracy. Detection, separation and analysis of a random component of current superimposed on the prebreakdown direct current form the basis of this investigation. Here, the prebreakdown direct current is the output of a test cell with large electrodes immersed in a liquid medium subjected to high direct voltages. Measurement of the probability-distribution function of the randomly fluctuating component of the current gives insight into the mechanism of conduction in a liquid medium subjected to high voltages and into the processes responsible for the existence of this fluctuating component.
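A generic illustration of the kind of analysis described (our own synthetic example, not the paper's instrumentation or data): separate the fluctuating component from the slowly varying prebreakdown direct current and estimate its probability distribution.

    import numpy as np

    rng = np.random.default_rng(3)

    # synthetic record: a slowly drifting dc level with a random
    # fluctuating component superimposed on it
    fs = 10_000.0                               # sampling rate, Hz (assumed)
    t = np.arange(0, 2.0, 1.0 / fs)
    i_dc = 2e-9 * (1.0 + 0.05 * t)              # drifting dc current, A
    i_meas = i_dc + 5e-11 * rng.standard_normal(t.size)

    # separate the fluctuating component by subtracting a running-mean
    # estimate of the dc level
    win = 501
    kernel = np.ones(win) / win
    i_slow = np.convolve(i_meas, kernel, mode="same")
    i_fluct = i_meas - i_slow

    # empirical probability distribution of the fluctuating component
    pdf, edges = np.histogram(i_fluct, bins=80, density=True)
    print("std of fluctuating component: %.3e A" % i_fluct.std())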
Abstract:
We study the distribution of the first passage time for Lévy-type anomalous diffusion. A fractional Fokker-Planck equation framework is introduced. For the zero-drift case, an explicit analytic solution for the first passage time density function is obtained using fractional calculus and expressed in terms of Fox H-functions. The asymptotic behaviour of the density function is discussed. For the nonzero-drift case, we obtain an expression for the Laplace transform of the first passage time density function, from which the mean first passage time and variance are derived.
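For orientation, hedged as generic background rather than the paper's exact formulation (in particular, whether the fractional operator acts in time or in space is not specified here): one commonly used force-free fractional Fokker-Planck equation reads

\[
\frac{\partial W(x,t)}{\partial t} \;=\; {}_{0}D_{t}^{\,1-\alpha}\, K_{\alpha}\,\frac{\partial^{2} W(x,t)}{\partial x^{2}}, \qquad 0 < \alpha < 1,
\]

with a Riemann-Liouville operator ${}_{0}D_{t}^{\,1-\alpha}$, and the first passage time density follows from the survival probability as f(t) = -dS(t)/dt. In the ordinary limit α = 1 with constant drift v towards an absorbing boundary a distance L away, this reduces to the familiar inverse-Gaussian density

\[
f(t) \;=\; \frac{L}{\sqrt{4\pi K t^{3}}}\, \exp\!\left[-\frac{(L - v t)^{2}}{4 K t}\right],
\]

whose Laplace transform yields the mean first passage time L/v.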
Abstract:
We reconsider standard uniaxial fatigue test data obtained from handbooks. Many S-N curve fits to such data represent the median life and exclude load-dependent variance in life. Presently available approaches for incorporating probabilistic aspects explicitly within the S-N curves have some shortcomings, which we discuss. We propose a new linear S-N fit with a prespecified failure probability, load-dependent variance, and reasonable behavior at extreme loads. We fit our parameters using maximum likelihood, show the reasonableness of the fit using Q-Q plots, and obtain standard error estimates via Monte Carlo simulations. The proposed fitting method may be used for obtaining S-N curves from the same data as already available, with the same mathematical form, but in cases in which the failure probability is smaller, say, 10 % instead of 50 %, and in which the fitted line is not parallel to the 50 % (median) line.
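A hedged sketch of the kind of fit described (our own model form, parameter names and synthetic data, not the paper's): fit log-life as a linear function of log-stress by maximum likelihood, allow the scatter to depend on the load, and then read off a curve for a chosen failure probability.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    # synthetic S-N data for illustration (log10 stress amplitude, log10 cycles to failure)
    rng = np.random.default_rng(1)
    logS = rng.uniform(2.3, 2.9, size=60)
    true_mu = 12.0 - 3.0 * logS                     # median log10-life
    true_sd = 0.05 + 0.4 * (2.9 - logS)             # scatter grows at low loads
    logN = rng.normal(true_mu, true_sd)

    def negloglik(p):
        a, b, c, d = p
        mu = a + b * logS                           # linear S-N fit in log-log space
        sd = np.exp(c + d * logS)                   # load-dependent scatter, kept positive
        return -np.sum(norm.logpdf(logN, loc=mu, scale=sd))

    fit = minimize(negloglik, x0=[10.0, -2.0, -1.0, 0.0], method="Nelder-Mead")
    a, b, c, d = fit.x

    # S-N curve at a prespecified failure probability, e.g. 10 % instead of the median
    p_fail = 0.10
    logS_grid = np.linspace(2.3, 2.9, 5)
    logN_p = a + b * logS_grid + norm.ppf(p_fail) * np.exp(c + d * logS_grid)
    print(np.column_stack([logS_grid, logN_p]))

The quantile curve keeps the same linear mathematical form as the median fit but, because the scatter is load-dependent, it is not parallel to the median line, in the spirit of the abstract. Standard errors of the fitted parameters could then be obtained by Monte Carlo resampling, as the authors describe.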
Abstract:
Two models for amplify-and-forward (AF) relaying, namely fixed-gain and fixed-power relaying, have been extensively studied in the literature given their ability to harness spatial diversity. In fixed-gain relaying, the relay gain is fixed but its transmit power varies as a function of the source-relay channel gain. In fixed-power relaying, the relay transmit power is fixed but its gain varies. We revisit and generalize the fundamental two-hop AF relaying model. We present an optimal scheme in which an average-power-constrained AF relay adapts its gain and transmit power to minimize the symbol error probability (SEP) at the destination. Also derived are insightful and practically amenable closed-form bounds for the optimal relay gain. We then analyze the SEP of MPSK, derive tight bounds for it, and characterize the diversity order for Rayleigh fading. Also derived is an SEP approximation that is accurate to within 0.1 dB. Extensive results show that the scheme yields significant energy savings of 2.0-7.7 dB at the source and relay. Optimal relay placement for the proposed scheme is also characterized and differs from that for fixed-gain or fixed-power relaying. Generalizations to MQAM and other fading distributions are also discussed.
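As background, hedged as standard results for the two-hop AF model rather than this paper's contribution: if γ₁ and γ₂ denote the instantaneous SNRs of the source-relay and relay-destination links, a relay that inverts the first-hop channel (fixed power, variable gain) yields the end-to-end SNR

\[
\gamma_{\mathrm{eq}} \;=\; \frac{\gamma_1\,\gamma_2}{\gamma_1 + \gamma_2 + 1},
\]

while a fixed-gain relay gives $\gamma_{\mathrm{eq}} = \gamma_1\gamma_2/(\gamma_2 + C)$ for a constant C set by the gain. The SEP of MPSK follows by averaging the conditional error probability over the fading distribution of γ_eq; the scheme described above instead adapts the relay gain and power jointly, under an average power constraint, to minimize this average.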
Abstract:
The problem of identifying user intent has received considerable attention in recent years, particularly in the context of improving the search experience via query contextualization. Intent can be characterized by multiple dimensions, which are often not observable from the query words alone. Accurate identification of intent from query words remains a challenging problem, primarily because it is extremely difficult to discover these dimensions. The problem is often significantly compounded by the lack of representative training samples. We present a generic, extensible framework for learning a multi-dimensional representation of user intent from the query words. The approach models the latent relationships between facets using a tree-structured distribution, which leads to an efficient and convergent algorithm, FastQ, for identifying the multi-faceted intent of users based on just the query words. We also incorporate WordNet to extend the system's capability to queries containing words that do not appear in the training data. Empirical results show that FastQ yields accurate identification of intent when compared to a gold standard.
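A hedged note on the modelling idea (a standard property of tree-structured graphical models; the exact factorization used by FastQ may differ): if the intent facets F_1, ..., F_k are arranged on a tree T, their joint distribution given a query q factorizes as

\[
p(f_1, \ldots, f_k \mid q) \;=\; \prod_{i=1}^{k} p\!\left(f_i \mid f_{\pi(i)}, q\right),
\]

where π(i) denotes the parent of facet i in T (the root has no parent). Exact inference on such a tree takes time linear in the number of facets, which is what makes an efficient, convergent learning algorithm feasible.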
Abstract:
In this paper, we consider a distributed function computation setting, where there are m distributed but correlated sources X1,...,Xm and a receiver interested in computing an s-dimensional subspace generated by [X1,...,Xm]Γ for some (m × s) matrix Γ of rank s. We construct a scheme based on nested linear codes and characterize the achievable rates obtained using the scheme. The proposed nested-linear-code approach performs at least as well as the Slepian-Wolf scheme in terms of sum-rate performance for all subspaces and source distributions. In addition, for a large class of distributions and subspaces, the scheme improves upon the Slepian-Wolf approach. The nested-linear-code scheme may be viewed as uniting under a common framework, both the Korner-Marton approach of using a common linear encoder as well as the Slepian-Wolf approach of employing different encoders at each source. Along the way, we prove an interesting and fundamental structural result on the nature of subspaces of an m-dimensional vector space V with respect to a normalized measure of entropy. Here, each element in V corresponds to a distinct linear combination of the m random variables X1,...,Xm, whose joint probability distribution function is given.
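For context, hedged as classical background results rather than the paper's contribution: for two correlated binary sources X1 and X2, a receiver that only wants the modulo-two sum Z = X1 ⊕ X2 can, by the Korner-Marton scheme, use the same linear (parity-check) encoder at both sources and succeed with sum rate

\[
R_1 + R_2 \;=\; 2\,H(X_1 \oplus X_2),
\]

whereas recovering the pair itself by Slepian-Wolf coding requires sum rate H(X_1, X_2). Whenever 2H(X_1 ⊕ X_2) < H(X_1, X_2), the common-linear-code approach wins; the nested-linear-code construction described above unifies and interpolates between these two strategies.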
Abstract:
We study the tradeoff between the average error probability and the average queueing delay of messages that arrive randomly at the transmitter of a point-to-point discrete memoryless channel using variable-rate, fixed-codeword-length random coding. Bounds on the exponential decay rate of the average error probability with average queueing delay, in the regime of large average delay, are obtained. Upper and lower bounds on the optimal average delay for a given average error probability constraint are presented. We then formulate a constrained Markov decision problem for characterizing the rate of transmission as a function of queue size given an average error probability constraint. Using a Lagrange multiplier, the constrained Markov decision problem is converted into a problem of minimizing the average cost of an unconstrained Markov decision problem. A simple heuristic policy is proposed which approximately achieves the optimal average cost.
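A hedged sketch of the Lagrangian relaxation mentioned above (generic constrained-MDP machinery in our notation, not necessarily the paper's exact formulation): with queue state q, rate action r, a delay-related stage cost c(q, r) and an error-related stage cost d(q, r), the constrained problem

\[
\min_{\pi}\ \limsup_{T\to\infty}\frac{1}{T}\,\mathbb{E}_{\pi}\!\left[\sum_{t=1}^{T} c(q_t, r_t)\right]
\quad \text{subject to} \quad
\limsup_{T\to\infty}\frac{1}{T}\,\mathbb{E}_{\pi}\!\left[\sum_{t=1}^{T} d(q_t, r_t)\right] \le \bar{d}
\]

is relaxed with a multiplier λ ≥ 0 into an unconstrained average-cost problem with stage cost c_λ = c + λd, whose optimal stationary policy satisfies the average-cost optimality equation

\[
h(q) + \rho \;=\; \min_{r}\Big[\, c_{\lambda}(q, r) + \sum_{q'} p(q' \mid q, r)\, h(q') \,\Big].
\]

Sweeping λ then traces out the tradeoff between average delay and average error probability.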
Abstract:
The scalar resonance recently discovered at the Large Hadron Collider is now almost confirmed to be a Higgs boson, whose CP properties are yet to be established. At the International Linear Collider, with and without polarized beams, it may be possible to probe these properties at high precision. In this work, we study the possibility of probing departures from the pure CP-even case by using the decay distributions in the process $e^+e^- \to t\bar{t}\Phi$, with $\Phi$ mainly decaying into a $b\bar{b}$ pair. We have compared a minimal extension of the Standard Model (model I), with an additional pseudoscalar degree of freedom, against a more realistic case, namely the CP-violating two-Higgs-doublet model (model II), which permits a more general description of the couplings. We have considered the International Linear Collider with $\sqrt{s} = 800$ GeV and an integrated luminosity of 300 fb$^{-1}$. Our main findings are that, even for small departures from the CP-even case, the decay distributions are sensitive to the presence of a CP-odd component in model II, while it is difficult to probe these departures in model I unless the pseudoscalar component is very large. Noting that the proposed degrees of beam polarization increase the statistics, the process demonstrates the effective role of beam polarization in studies beyond the Standard Model. Further, our study shows that an indefinite-CP Higgs would be a sensitive laboratory for physics beyond the Standard Model.
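For reference, a standard parametrization of a CP-indefinite top-Higgs Yukawa coupling (our notation; the model-specific couplings used in the paper may differ):

\[
\mathcal{L}_{t\bar{t}\Phi} \;=\; -\frac{m_t}{v}\,\bar{t}\,\left(a + i\, b\,\gamma_5\right) t\,\Phi,
\]

where a = 1, b = 0 reproduces the pure CP-even (Standard Model-like) coupling, a = 0 with b ≠ 0 corresponds to a pure pseudoscalar, and simultaneously nonzero a and b signal CP violation. The decay distributions in $e^+e^- \to t\bar{t}\Phi$ are sensitive to the relative size and sign of a and b, which is what the analysis above exploits.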