68 results for Probability distributions
Abstract:
Upper bounds on the probability of error due to co-channel interference are proposed in this correspondence. The bounds are easy to compute and can be fairly tight.
Abstract:
The paper outlines a technique for sensitive measurement of conduction phenomena in liquid dielectrics. The special features of this technique are the simplicity of the electrical system, the inexpensive instrumentation and the high accuracy. The detection, separation and analysis of a random component of current superimposed on the prebreakdown direct current form the basis of this investigation. Here, the prebreakdown direct current is the output of a test cell with large electrodes immersed in a liquid medium subjected to high direct voltages. Measuring the probability-distribution function of the random fluctuating component of the current gives insight into the mechanism of conduction in a liquid medium under high voltage and into the processes responsible for the existence of the fluctuating component.
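A minimal sketch of the signal-processing step described above: separating the fluctuating component from the direct current and estimating its empirical distribution. The sampling rate, current level, and synthetic noise record are our assumptions, not values from the paper.

```python
import numpy as np

# Sketch (our illustration, not the paper's instrumentation): separate the
# random fluctuation from the prebreakdown direct current and estimate its
# probability-distribution function.
rng = np.random.default_rng(0)
fs, T = 10_000, 2.0                                  # sample rate (Hz), record length (s), assumed
t = np.arange(0, T, 1 / fs)
i_dc = 5e-6                                          # steady prebreakdown current (A), assumed
i_meas = i_dc + 2e-7 * rng.standard_normal(t.size)   # synthetic measured record

fluct = i_meas - i_meas.mean()                       # fluctuating component only
pdf, edges = np.histogram(fluct, bins=100, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])             # empirical density of the fluctuation
cdf = np.cumsum(pdf * np.diff(edges))                # empirical probability-distribution function
```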
Abstract:
We study the distribution of the first passage time for Lévy-type anomalous diffusion. A fractional Fokker-Planck equation framework is introduced. For the zero-drift case, an explicit analytic solution for the first passage time density function is given in terms of Fox H-functions, using fractional calculus. The asymptotic behaviour of the density function is discussed. For the nonzero-drift case, we obtain an expression for the Laplace transform of the first passage time density function, from which the mean first passage time and variance are derived.
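The abstract does not reproduce the governing equation; for orientation, the standard space-fractional Fokker-Planck equation used for Lévy flights (our assumption of the precise form, following the common convention with a Riesz fractional derivative) reads

```latex
\frac{\partial P(x,t)}{\partial t}
  = -v\,\frac{\partial P(x,t)}{\partial x}
  + K_\alpha\,\frac{\partial^\alpha P(x,t)}{\partial |x|^\alpha},
  \qquad 0 < \alpha \le 2,
```

where v is the drift, K_alpha the generalized diffusion coefficient, and the last term the Riesz fractional derivative; the first passage time density then follows from the survival probability S(t) of the process on the domain via f(t) = -dS(t)/dt.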
Abstract:
We reconsider standard uniaxial fatigue test data obtained from handbooks. Many S-N curve fits to such data represent the median life and exclude load-dependent variance in life. Presently available approaches for incorporating probabilistic aspects explicitly within the S-N curves have some shortcomings, which we discuss. We propose a new linear S-N fit with a prespecified failure probability, load-dependent variance, and reasonable behavior at extreme loads. We fit our parameters using maximum likelihood, show the reasonableness of the fit using Q-Q plots, and obtain standard error estimates via Monte Carlo simulations. The proposed fitting method may be used for obtaining S-N curves from the same data as already available, with the same mathematical form, but in cases in which the failure probability is smaller, say, 10 % instead of 50 %, and in which the fitted line is not parallel to the 50 % (median) line.
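A sketch of the kind of maximum-likelihood fit the abstract describes: a linear fit of log-life against log-stress with load-dependent scatter, from which a line at a prespecified failure probability can be drawn. The variance parameterization, synthetic data, and starting values are our assumptions; the paper's exact model may differ.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_log_likelihood(params, log_S, log_N):
    # Linear S-N fit in log-log space with load-dependent scatter:
    # log N ~ Normal(a + b*log S, sigma(S)^2), sigma = exp(log_c + d*log S)
    a, b, log_c, d = params
    mean = a + b * log_S
    sigma = np.exp(log_c + d * log_S)          # parameterization keeps sigma > 0
    return np.sum(0.5 * np.log(2 * np.pi * sigma**2)
                  + (log_N - mean) ** 2 / (2 * sigma**2))

# Synthetic fatigue data, for illustration only
rng = np.random.default_rng(0)
log_S = rng.uniform(2.0, 3.0, 80)
log_N = 20.0 - 5.0 * log_S + rng.normal(0, 0.1 + 0.05 * log_S)

res = minimize(neg_log_likelihood, x0=[20.0, -5.0, -2.0, 0.0],
               args=(log_S, log_N), method="Nelder-Mead")
a, b, log_c, d = res.x

# S-N line at a prespecified failure probability, e.g. p = 10%
p = 0.10
log_N_p = a + b * log_S + norm.ppf(p) * np.exp(log_c + d * log_S)
# when d != 0 this line is not parallel to the median (50%) line
```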
Abstract:
Two models for amplify-and-forward (AF) relaying, namely, fixed-gain and fixed-power relaying, have been extensively studied in the literature, given their ability to harness spatial diversity. In fixed-gain relaying, the relay gain is fixed but its transmit power varies as a function of the source-relay channel gain. In fixed-power relaying, the relay transmit power is fixed, but its gain varies. We revisit and generalize the fundamental two-hop AF relaying model. We present an optimal scheme in which an average-power-constrained AF relay adapts its gain and transmit power to minimize the symbol error probability (SEP) at the destination. Also derived are insightful and practically amenable closed-form bounds for the optimal relay gain. We then analyze the SEP of MPSK, derive tight bounds for it, and characterize the diversity order for Rayleigh fading. Also derived is an SEP approximation that is accurate to within 0.1 dB. Extensive results show that the scheme yields significant energy savings of 2.0-7.7 dB at the source and relay. Optimal relay placement for the proposed scheme is also characterized, and differs from that for fixed-gain or fixed-power relaying. Generalizations to MQAM and other fading distributions are also discussed.
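For reference, a Monte Carlo baseline for the two-hop link the abstract generalizes: fixed-gain AF relaying with MPSK over Rayleigh fading. The gain normalization, SNR point, and detector are our illustrative choices; the paper's optimal gain and power adaptation is not implemented here.

```python
import numpy as np

# Monte Carlo SEP of MPSK over a two-hop fixed-gain AF relay link
# with Rayleigh fading on both hops.
rng = np.random.default_rng(0)
M, n_sym, snr_db = 4, 200_000, 15.0
snr = 10 ** (snr_db / 10)

idx = rng.integers(0, M, n_sym)
s = np.exp(2j * np.pi * idx / M)                       # unit-energy MPSK symbols

def cgauss(scale):
    # circularly symmetric complex Gaussian samples
    return (rng.normal(size=n_sym) + 1j * rng.normal(size=n_sym)) * scale

h1, h2 = cgauss(np.sqrt(0.5)), cgauss(np.sqrt(0.5))    # Rayleigh channel gains
n1, n2 = cgauss(np.sqrt(0.5 / snr)), cgauss(np.sqrt(0.5 / snr))

G = np.sqrt(snr / (snr + 1))                           # fixed gain: unit average relay power
y = h2 * G * (h1 * s + n1) + n2                        # destination observation
z = y / (G * h1 * h2)                                  # coherent equalization
det = np.round(np.angle(z) * M / (2 * np.pi)).astype(int) % M
print(f"SEP at {snr_db} dB:", np.mean(det != idx))
```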
Abstract:
The problem of identifying user intent has received considerable attention in recent years, particularly in the context of improving the search experience via query contextualization. Intent can be characterized by multiple dimensions, which are often not observed from query words alone. Accurate identification of intent from query words remains a challenging problem, primarily because it is extremely difficult to discover these dimensions. The problem is often significantly compounded by the lack of representative training samples. We present a generic, extensible framework for learning the multi-dimensional representation of user intent from query words. The approach models the latent relationships between facets using a tree-structured distribution, which leads to an efficient and convergent algorithm, FastQ, for identifying the multi-faceted intent of users based on the query words alone. We also incorporate WordNet to extend the system's capabilities to queries containing words that do not appear in the training data. Empirical results show that FastQ yields accurate identification of intent when compared to a gold standard.
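The abstract does not spell out how the tree-structured distribution over facets is learned; a standard way to fit one is the Chow-Liu algorithm, which builds a maximum-weight spanning tree over pairwise mutual information. The sketch below is our illustration of that generic step, not the FastQ algorithm itself.

```python
import numpy as np
from itertools import combinations
from scipy.sparse.csgraph import minimum_spanning_tree

def mutual_information(x, y):
    # Empirical mutual information between two discrete variables (nats)
    mi = 0.0
    for a in np.unique(x):
        for b in np.unique(y):
            pxy = np.mean((x == a) & (y == b))
            if pxy > 0:
                mi += pxy * np.log(pxy / (np.mean(x == a) * np.mean(y == b)))
    return mi

def chow_liu_edges(data):
    # data: (n_samples, n_vars) matrix of discrete facet labels
    n_vars = data.shape[1]
    w = np.zeros((n_vars, n_vars))
    for i, j in combinations(range(n_vars), 2):
        w[i, j] = mutual_information(data[:, i], data[:, j]) + 1e-12  # keep edge present
    # maximum-weight spanning tree = minimum spanning tree on negated weights
    mst = minimum_spanning_tree(-w).tocoo()
    return list(zip(mst.row.tolist(), mst.col.tolist()))

# toy usage: three binary facets, two of them correlated (synthetic)
rng = np.random.default_rng(0)
z = rng.integers(0, 2, 500)
noise = (rng.random(500) < 0.1).astype(int)
data = np.column_stack([z, z ^ noise, rng.integers(0, 2, 500)])
print(chow_liu_edges(data))   # expect an edge linking the two correlated facets
```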
Abstract:
In this paper, we consider a distributed function computation setting, where there are m distributed but correlated sources X1,...,Xm and a receiver interested in computing an s-dimensional subspace generated by [X1,...,Xm]Γ for some (m × s) matrix Γ of rank s. We construct a scheme based on nested linear codes and characterize the achievable rates obtained using the scheme. The proposed nested-linear-code approach performs at least as well as the Slepian-Wolf scheme in terms of sum-rate performance for all subspaces and source distributions. In addition, for a large class of distributions and subspaces, the scheme improves upon the Slepian-Wolf approach. The nested-linear-code scheme may be viewed as uniting, under a common framework, both the Körner-Marton approach of using a common linear encoder and the Slepian-Wolf approach of employing different encoders at each source. Along the way, we prove an interesting and fundamental structural result on the nature of subspaces of an m-dimensional vector space V with respect to a normalized measure of entropy. Here, each element in V corresponds to a distinct linear combination of the set {X1, ..., Xm} of m random variables whose joint probability distribution is given.
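As a concrete instance of the Körner-Marton versus Slepian-Wolf comparison invoked above, the classic example is computing the modulo-two sum of a doubly symmetric binary source. The numbers below are our illustration of that textbook comparison, not a result from the paper.

```python
import numpy as np

def h2(q):
    # Binary entropy in bits
    return 0.0 if q in (0, 1) else -q * np.log2(q) - (1 - q) * np.log2(1 - q)

# Doubly symmetric binary source: X ~ Bernoulli(1/2), Y = X xor N, N ~ Bernoulli(q)
q = 0.1
# Slepian-Wolf: recover (X, Y) fully, sum rate H(X, Y) = 1 + h(q)
sw = 1 + h2(q)
# Korner-Marton: recover only Z = X xor Y with a common linear code, sum rate 2 h(q)
km = 2 * h2(q)
print(f"Slepian-Wolf sum rate: {sw:.3f} bits, Korner-Marton: {km:.3f} bits")
# Korner-Marton wins whenever 2 h(q) < 1 + h(q), i.e. h(q) < 1 (any q != 1/2)
```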
Abstract:
We study the tradeoff between the average error probability and the average queueing delay of messages that randomly arrive at the transmitter of a point-to-point discrete memoryless channel using variable-rate, fixed-codeword-length random coding. Bounds on the exponential decay rate of the average error probability with average queueing delay in the regime of large average delay are obtained. Upper and lower bounds on the optimal average delay for a given average error probability constraint are presented. We then formulate a constrained Markov decision problem for characterizing the rate of transmission as a function of queue size given an average error probability constraint. Using a Lagrange multiplier, the constrained Markov decision problem is converted into a problem of minimizing the average cost for an unconstrained Markov decision problem. A simple heuristic policy that approximately achieves the optimal average cost is proposed.
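A toy version (our illustration, not the paper's model) of the Lagrangian step: fold the error-probability constraint into the stage cost with a multiplier lam, then solve the resulting average-cost MDP by relative value iteration; sweeping lam trades delay against errors. The action set, error probabilities, and arrival process are assumed.

```python
import numpy as np

Q = 20                           # queue capacity
rates = [0, 1, 2]                # packets transmitted per slot (assumed action set)
err = [0.0, 1e-3, 5e-2]          # per-slot error probability at each rate (assumed)
p_arr = 0.6                      # Bernoulli arrival probability (assumed)

def solve(lam, iters=3000):
    V = np.zeros(Q + 1)
    policy = np.zeros(Q + 1, dtype=int)
    gain = 0.0
    for _ in range(iters):
        newV = np.zeros(Q + 1)
        for q in range(Q + 1):
            costs = []
            for a, r in enumerate(rates):
                q_after = max(q - r, 0)
                nxt = p_arr * V[min(q_after + 1, Q)] + (1 - p_arr) * V[q_after]
                costs.append(q + lam * err[a] + nxt)   # Lagrangian stage cost
            policy[q] = int(np.argmin(costs))
            newV[q] = min(costs)
        gain = newV[0]           # average-cost estimate (reference state 0)
        V = newV - gain          # relative value iteration keeps V bounded
    return policy, gain

policy, gain = solve(lam=50.0)
print("rate vs queue size:", [rates[a] for a in policy])
```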
Abstract:
The recently discovered scalar resonance at the Large Hadron Collider is now almost confirmed to be a Higgs boson, whose CP properties are yet to be established. At the International Linear Collider with and without polarized beams, it may be possible to probe these properties at high precision. In this work, we study the possibility of probing departures from the pure CP-even case, by using the decay distributions in the process e+e- -> t t-bar Phi, with Phi mainly decaying into a b b-bar pair. We have compared the case of a minimal extension of the Standard Model case (model I) with an additional pseudoscalar degree of freedom, with a more realistic case namely the CP-violating two-Higgs doublet model (model II) that permits a more general description of the couplings. We have considered the International Linear Collider with sqrt(s) = 800 GeV and an integrated luminosity of 300 fb^-1. Our main findings are that even in the case of small departures from the CP-even case, the decay distributions are sensitive to the presence of a CP-odd component in model II, while it is difficult to probe these departures in model I unless the pseudoscalar component is very large. Noting that the proposed degrees of beam polarization increase the statistics, the process demonstrates the effective role of beam polarization in studies beyond the Standard Model. Further, our study shows that an indefinite CP Higgs would be a sensitive laboratory to physics beyond the Standard Model.
Abstract:
In this paper we calculate the potential for a prolate spheroidal distribution, as in a dark matter halo, with a radially varying eccentricity. This is obtained by summing up the shell-by-shell contributions of isodensity surfaces, which are taken to be concentric, with a common polar axis, and with an axis ratio that varies with radius. Interestingly, the constancy of the potential inside a shell is shown to be a good approximation even when the isodensity contours are dissimilar spheroids, as long as the radial variation in eccentricity is small, as seen in realistic systems. We consider three cases, in which the isodensity contours are more prolate at large radii, less prolate, or of constant eccentricity. Other relevant physical quantities, such as the rotation velocity and the net orbital and vertical frequencies due to the halo and an exponential disc of finite thickness embedded in it, are also obtained. We apply this to the kinematical origin of the Galactic warp, and show that a prolate-shaped halo is not conducive to making long-lived warps, contrary to what has been proposed in the literature. The results obtained for a prolate mass distribution with a variable axis ratio are general, and can be applied to other astrophysical systems, such as prolate bars, for a more realistic treatment.
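A brute-force cross-check of the kind of calculation described: the potential of a prolate density whose isodensity surfaces grow more prolate outward, obtained by direct summation over a density grid rather than the paper's shell-by-shell decomposition. The density profile, axis-ratio law, and grid are our assumptions, chosen only to make the sketch run.

```python
import numpy as np

# Direct-summation sketch (ours, not the paper's method): potential of a
# prolate density with a radially varying axis ratio q < 1 (long axis z).
G, L, n = 1.0, 30.0, 81
ax = np.linspace(-L, L, n)
x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")
dV = (ax[1] - ax[0]) ** 3

r = np.sqrt(x**2 + y**2 + z**2) + 1e-9
q = 0.9 - 0.2 * r / L                      # assumed: more prolate at large radii
m2 = (x**2 + y**2) / q**2 + z**2           # spheroidal radius (crude: q taken at r)
rho = 1.0 / (m2 + 4.0)                     # cored density profile (assumed)

def potential(px, py, pz):
    d = np.sqrt((x - px)**2 + (y - py)**2 + (z - pz)**2) + 1e-6
    return -G * np.sum(rho * dV / d)

# potential along the major (z) and minor (x) axes
for R in (5.0, 10.0, 20.0):
    print(R, potential(0, 0, R), potential(R, 0, 0))
```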
Abstract:
Given a metric space with a Borel probability measure, for each integer N, we obtain a probability distribution on N x N distance matrices by considering the distances between pairs of points in a sample consisting of N points chosen independently from the metric space with respect to the given measure. We show that this gives an asymptotically bi-Lipschitz relation between metric measure spaces and the corresponding distance matrices. This is an effective version of a result of Vershik that metric measure spaces are determined by associated distributions on infinite random matrices.
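The construction is easy to state in code: draw N points i.i.d. from the measure and record their pairwise distances. A minimal sketch for the unit square with Lebesgue measure (our choice of metric measure space) follows.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

# Sample N i.i.d. points from a metric measure space and record the induced
# N x N distance matrices; each trial is one draw from the distribution.
rng = np.random.default_rng(0)
N, trials = 50, 200
matrices = np.array([squareform(pdist(rng.random((N, 2))))
                     for _ in range(trials)])
print(matrices.shape)            # (trials, N, N): samples from the distribution
print(matrices.mean())           # mean entry (includes the zero diagonal)
```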
Abstract:
The effects of the initial height on the temporal persistence probability of steady-state height fluctuations in up-down symmetric linear models of surface growth are investigated. We study the (1 + 1)-dimensional Family model and the (1 + 1)- and (2 + 1)-dimensional larger curvature (LC) model. Both the Family and LC models have up-down symmetry, so the positive and negative persistence probabilities in the steady state, averaged over all values of the initial height h_0, are equal to each other. However, these two probabilities are not equal if one considers a fixed nonzero value of h_0. Plots of the positive persistence probability for negative initial height versus time exhibit power-law behavior if the magnitude of the initial height is larger than the interface width at saturation. By symmetry, the negative persistence probability for positive initial height also exhibits the same behavior. The persistence exponent that describes this power-law decay decreases as the magnitude of the initial height is increased. The dependence of the persistence probability on the initial height, the system size, and the discrete sampling time is found to exhibit scaling behavior.
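A simplified simulation (ours) of the (1+1)-dimensional Family model, random deposition with relaxation to the lowest neighbouring site, together with a measurement of the positive persistence probability in the saturated regime. Run lengths and the tie-breaking rule are our choices.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 128

def deposit_monolayer(h):
    # Family model: drop on a random site, relax to a strictly lower neighbour
    for _ in range(L):
        i = rng.integers(L)
        left, right = (i - 1) % L, (i + 1) % L
        j = i
        if h[left] < h[j]:
            j = left
        if h[right] < h[j]:
            j = right                 # ties stay put (one common convention)
        h[j] += 1

h = np.zeros(L, dtype=np.int64)
for _ in range(20_000):               # run into the saturated steady state (assumed long enough)
    deposit_monolayer(h)

positive0 = (h - h.mean()) > 0        # sites initially above the mean height
alive = positive0.copy()
persistence = []
for _ in range(500):
    deposit_monolayer(h)
    alive &= (h - h.mean()) > 0       # sign of the fluctuation never flipped
    persistence.append(alive.sum() / positive0.sum())  # P_+(t), expect power-law decay
```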
Abstract:
In underlay cognitive radio (CR), a secondary user (SU) can transmit concurrently with a primary user (PU) provided that it does not cause excessive interference at the primary receiver (PRx). The interference constraint fundamentally changes how the SU transmits, and makes link adaptation in underlay CR systems different from that in conventional wireless systems. In this paper, we develop a novel, symbol error probability (SEP)-optimal transmit power adaptation policy for an underlay CR system that is subject to two practically motivated constraints, namely, a peak transmit power constraint and an interference outage probability constraint. For the optimal policy, we derive its SEP and a tight upper bound for MPSK and MQAM constellations when the links from the secondary transmitter (STx) to its receiver and to the PRx follow the versatile Nakagami-m fading model. We also characterize the impact of imperfectly estimating the STx-PRx link on the SEP and the interference. Extensive simulation results are presented to validate the analysis and evaluate the impact of the constraints, fading parameters, and imperfect estimates.
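A small Monte Carlo sketch of the two constraints at work (our simplified policy, not the paper's SEP-optimal one): the transmitter uses peak power whenever it can, spends the interference outage budget on the mildest violations, and truncates its power elsewhere. Nakagami-m fading of the STx-PRx link is modeled as a gamma-distributed power gain; all parameter values are assumed.

```python
import numpy as np

# Underlay CR sketch: peak transmit power constraint P_pk plus an interference
# outage constraint Pr(P * g > I_th) <= O_max on the STx-PRx link.
rng = np.random.default_rng(0)
P_pk, I_th, O_max = 1.0, 0.1, 0.05
m = 2.0                                        # Nakagami-m parameter (assumed)
g = rng.gamma(m, 1.0 / m, 1_000_000)           # unit-mean power gain, STx-PRx link

g_safe = I_th / P_pk                           # peak power never violates below this gain
# spend the outage budget on the smallest violating gains, truncate the rest
g_cut = np.quantile(g, min(1.0, np.mean(g <= g_safe) + O_max))
P = np.where(g <= g_cut, P_pk, I_th / g)       # transmit power policy
print("interference outage:", np.mean(P * g > I_th))   # ~ O_max by construction
print("average transmit power:", P.mean())
```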
Abstract:
The recent focus of flood frequency analysis (FFA) studies has been on the development of methods to model joint distributions of variables such as peak flow, volume, and duration that characterize a flood event, as comprehensive knowledge of a flood event is often necessary in hydrological applications. A diffusion-process-based adaptive kernel (D-kernel) is suggested in this paper for this purpose. It is data-driven and flexible and, unlike most kernel density estimators, always yields a bona fide probability density function. It overcomes shortcomings associated with the use of conventional kernel density estimators in FFA, such as the boundary leakage problem and the normal reference rule. The potential of the D-kernel is demonstrated by application to synthetic samples of various sizes drawn from known unimodal and bimodal populations, and to five typical peak flow records from different parts of the world. It is shown to be effective when compared to the conventional Gaussian kernel and the best of seven commonly used copulas (Gumbel-Hougaard, Frank, Clayton, Joe, Normal, Plackett, and Student's t) in estimating the joint distribution of peak flow characteristics and extrapolating beyond historical maxima. Selection of the optimum number of bins is found to be critical in modeling with the D-kernel.
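The diffusion-based estimator itself is not available in standard scientific Python, but the conventional Gaussian kernel baseline that the paper improves upon is a one-liner with scipy. The synthetic peak-volume data below are our illustration; the comments flag the boundary leakage the abstract criticizes.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Conventional Gaussian kernel estimate of the joint density of flood peak
# and volume (the baseline the D-kernel is compared against in the paper).
rng = np.random.default_rng(0)
n = 500
peak = rng.lognormal(mean=5.0, sigma=0.5, size=n)           # peak flow (m^3/s), synthetic
volume = peak * rng.lognormal(mean=1.0, sigma=0.3, size=n)  # correlated volume, synthetic

kde = gaussian_kde(np.vstack([peak, volume]))   # bandwidth from the normal reference rule
# joint density on a grid; note that Gaussian kernels leak probability mass
# below zero for strictly positive variables ("boundary leakage")
pk = np.linspace(peak.min(), peak.max(), 100)
vl = np.linspace(volume.min(), volume.max(), 100)
PK, VL = np.meshgrid(pk, vl)
density = kde(np.vstack([PK.ravel(), VL.ravel()])).reshape(PK.shape)
```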
Abstract:
We present a nonequilibrium strong-coupling approach to inhomogeneous systems of ultracold atoms in optical lattices. We demonstrate its application to the Mott-insulating phase of a two-dimensional Fermi-Hubbard model in the presence of a trap potential. Since the theory is formulated self-consistently, the numerical implementation relies on a massively parallel evaluation of the self-energy and the Green's function at each lattice site, employing thousands of CPUs. While the computation of the self-energy is straightforward to parallelize, the evaluation of the Green's function requires the inversion of a large sparse 10^d x 10^d matrix, with d > 6. As a crucial ingredient, our solution heavily relies on the smallness of the hopping as compared to the interaction strength and yields a widely scalable realization of a rapidly converging iterative algorithm which evaluates all elements of the Green's function. Results are validated by comparing with the homogeneous case via the local-density approximation. These calculations also show that the local-density approximation is valid in nonequilibrium setups without mass transport.
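A toy version (ours) of the numerical point being made: when the hopping is small compared with the local energy scale, each column of the Green's function G = (z I - H)^(-1) can be obtained by a rapidly converging fixed-point iteration instead of a direct inversion, and the columns are independent, which is what makes massive parallelization natural. Sizes and parameter values here are assumptions.

```python
import numpy as np
import scipy.sparse as sp

# Tiny 1-d tight-binding toy; the paper's matrices are 10^d x 10^d with d > 6.
n, t = 400, 0.05                     # sites and (small) hopping, assumed values
z = 2.0 + 0.1j                       # complex frequency point
H = sp.diags([-t * np.ones(n - 1), np.zeros(n), -t * np.ones(n - 1)],
             offsets=[-1, 0, 1], format="csr")

# Column j of G solves (z I - H) x = e_j; iterate x <- (e_j + H x) / z,
# which converges geometrically because ||H|| <= 2t << |z|.
j = n // 2
e = np.zeros(n, dtype=complex)
e[j] = 1.0
x = e / z
for _ in range(200):
    x_new = (e + H @ x) / z
    if np.linalg.norm(x_new - x) < 1e-14:
        break
    x = x_new
# x now approximates the j-th column of the Green's function
```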