68 results for Probability distributions
Abstract:
In this letter, we analyze the end-to-end average bit error probability (ABEP) of space shift keying (SSK) in cooperative relaying with the decode-and-forward (DF) protocol, considering multiple relays with threshold-based best relay selection and selection combining of the direct and relayed paths at the destination. We derive an exact closed-form analytical expression for the end-to-end ABEP of binary SSK, and the analytical results agree with simulation results. For non-binary SSK, approximate analytical and simulation results are presented.
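Closed-form results of this kind are usually checked against a direct Monte Carlo simulation of the end-to-end chain. Below is a minimal sketch of binary SSK with threshold-based DF relaying and selection combining, assuming Rayleigh fading, a single receive antenna, unit transmit power, and the SSK antenna-signature separation |h0 - h1|^2 / N0 as both the relay-threshold and selection metric; these modeling choices are illustrative, not taken from the letter.

```python
import numpy as np

rng = np.random.default_rng(1)

def rayleigh(shape):
    # i.i.d. CN(0, 1) fading coefficients
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

def ssk_detect(y, h):
    # ML detection for binary SSK: pick the antenna whose channel
    # signature is closest to the received sample
    return int(np.argmin(np.abs(y - h)))

def end_to_end_abep(snr_db, n_relays=3, gamma_th=0.5, trials=20_000):
    n0 = 10.0 ** (-snr_db / 10.0)      # noise power for unit transmit power
    noise = lambda: np.sqrt(n0 / 2) * (rng.standard_normal() + 1j * rng.standard_normal())
    errors = 0
    for _ in range(trials):
        b = int(rng.integers(2))       # information bit = active antenna index
        h_sd = rayleigh(2)             # source -> destination signatures
        h_sr = rayleigh((n_relays, 2)) # source -> relay signatures
        h_rd = rayleigh((n_relays, 2)) # relay -> destination signatures

        # Threshold test: a relay participates only if its effective SSK
        # "separation SNR" clears gamma_th (an assumed proxy criterion).
        active = np.flatnonzero(np.abs(h_sr[:, 0] - h_sr[:, 1]) ** 2 / n0 > gamma_th)

        q_direct = np.abs(h_sd[0] - h_sd[1]) ** 2
        b_direct = ssk_detect(h_sd[b] + noise(), h_sd)

        if active.size:
            # Best relay = strongest relay -> destination separation
            r = active[np.argmax(np.abs(h_rd[active, 0] - h_rd[active, 1]) ** 2)]
            b_relay = ssk_detect(h_sr[r, b] + noise(), h_sr[r])       # DF decode
            q_relay = np.abs(h_rd[r, 0] - h_rd[r, 1]) ** 2
            b_via_relay = ssk_detect(h_rd[r, b_relay] + noise(), h_rd[r])
            # Selection combining: keep the decision from the stronger path
            b_hat = b_via_relay if q_relay > q_direct else b_direct
        else:
            b_hat = b_direct
        errors += (b_hat != b)
    return errors / trials

print(end_to_end_abep(snr_db=10.0))
```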
Abstract:
Scaling behaviour has been observed at the mesoscopic level irrespective of crystal structure, type of boundary, and operative micro-mechanisms such as slip and twinning. The presence of scaling at the meso-scale, accompanied by that at the nano-scale, clearly demonstrates that the scaling spans different deformation processes and is truly universal in nature. The origin of the 1/2 power law in the deformation of crystalline materials, in which misorientation is proportional to the square root of strain, is attributed to the importance of interfaces in deformation processes. It is proposed that materials existing in three-dimensional Euclidean space accommodate plastic deformation by one-dimensional dislocations and their interaction with two-dimensional interfaces at different length scales. This gives rise to 1/2 power-law scaling in materials. This intrinsic relationship can be incorporated into crystal plasticity models that aim to span different length and time scales to predict the deformation response of crystalline materials accurately.
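Written out, the scaling relation the abstract refers to is the following (with theta the average misorientation, epsilon the plastic strain, and k a material- and scale-dependent prefactor; the symbols are chosen here for illustration):

```latex
% Misorientation--strain scaling (illustrative notation):
% \theta = average misorientation, \varepsilon = plastic strain,
% k = material- and scale-dependent prefactor.
\theta = k\,\varepsilon^{1/2}
\quad\Longleftrightarrow\quad
\log\theta = \log k + \tfrac{1}{2}\log\varepsilon
```

On a log-log plot of misorientation against strain, the claimed universality thus appears as a straight line of slope 1/2 across length scales and deformation mechanisms.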
Abstract:
Smoothed functional (SF) schemes for gradient estimation are known to be efficient in stochastic optimization algorithms, especially when the objective is to improve the performance of a stochastic system. However, the performance of these methods depends on several parameters, such as the choice of a suitable smoothing kernel. Different kernels have been studied in the literature, including the Gaussian, Cauchy, and uniform distributions, among others. This article studies a new class of kernels based on the q-Gaussian distribution, which has gained popularity in statistical physics over the last decade. Though the importance of this family of distributions is attributed to its ability to generalize the Gaussian distribution, we observe that this class encompasses almost all existing smoothing kernels. This motivates us to study SF schemes for gradient estimation using the q-Gaussian distribution. Using the derived gradient estimates, we propose two-timescale algorithms for optimization of a stochastic objective function in a constrained setting with a projected gradient search approach. We prove the convergence of our algorithms to the set of stationary points of an associated ODE. We also demonstrate their performance numerically through simulations on a queuing model.
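To make the scheme concrete, here is a small sketch of a one-sided SF gradient estimate driven by q-Gaussian perturbations. The deviates are drawn with the generalized Box-Muller recipe of Thistleton et al. (2007); the estimator uses the familiar Gaussian-kernel normalization, which is an assumption here, and the article's exact scaling constant for general q may differ.

```python
import numpy as np

rng = np.random.default_rng(2)

def q_log(x, q):
    # q-logarithm; reduces to log(x) as q -> 1
    return np.log(x) if abs(q - 1.0) < 1e-12 else (x ** (1.0 - q) - 1.0) / (1.0 - q)

def q_gaussian(q, shape):
    # Generalized Box-Muller (Thistleton et al., 2007), valid for q < 3:
    # standard q-Gaussian deviates from two uniform samples.
    qp = (1.0 + q) / (3.0 - q)
    u1, u2 = rng.random(shape), rng.random(shape)
    return np.sqrt(-2.0 * q_log(u1, qp)) * np.cos(2.0 * np.pi * u2)

def sf_gradient(f, x, q=0.8, beta=0.1, n_samples=2_000):
    # One-sided smoothed-functional gradient estimate with q-Gaussian
    # perturbations (Gaussian-kernel normalization assumed).
    g = np.zeros_like(x)
    for _ in range(n_samples):
        eta = q_gaussian(q, x.shape)
        g += eta * (f(x + beta * eta) - f(x)) / beta
    return g / n_samples

f = lambda x: np.sum(x ** 2)        # toy deterministic objective
x = np.array([1.0, -2.0])
print(sf_gradient(f, x))            # should approximate grad f(x) = 2x
```

In the two-timescale setting described in the abstract, an estimate like this would feed a projected gradient step on the slower timescale while the system simulation runs on the faster one.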
Abstract:
In this work, the hypothesis testing problem of spectrum sensing in a cognitive radio is formulated as a goodness-of-fit test against the general class of noise distributions used in most communications-related applications. A simple, general, and powerful spectrum sensing technique based on the number of weighted zero-crossings in the observations is proposed. For the cases of uniform and exponential weights, an expression for computing the near-optimal detection threshold that meets a given false alarm probability constraint is obtained. The proposed detector is shown to be robust to two commonly encountered types of noise uncertainties, namely, the noise model uncertainty, where the PDF of the noise process is not completely known, and the noise parameter uncertainty, where the parameters associated with the noise PDF are either partially or completely unknown. Simulation results validate our analysis, and illustrate the performance benefits of the proposed technique relative to existing methods, especially in the low SNR regime and in the presence of noise uncertainties.
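A bare-bones version of such a detector is easy to sketch. The snippet below uses uniform weights and sets the threshold from the binomial/CLT behaviour of the crossing count under H0 (i.i.d. zero-median noise); this is a simplification of the paper's near-optimal threshold expression, not the expression itself.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

def zc_statistic(x, w=None):
    # Weighted zero-crossing count: sum of weights at sign changes.
    crossings = (x[:-1] * x[1:]) < 0
    w = np.ones(crossings.size) if w is None else w
    return np.sum(w * crossings)

def detect(x, pfa=0.05):
    # Under H0 (i.i.d. zero-median noise) each adjacent pair crosses zero
    # with probability 1/2, so the uniform-weight statistic is
    # Binomial(N-1, 1/2). A correlated, signal-bearing sequence crosses
    # zero less often, so "signal present" is declared on a low count
    # (lower-tail CLT threshold at false alarm rate pfa).
    n = x.size - 1
    thresh = n / 2 + norm.ppf(pfa) * np.sqrt(n / 4)
    return zc_statistic(x) < thresh

n = 1000
noise = rng.standard_normal(n)
signal = np.sin(0.05 * np.arange(n)) + 0.5 * rng.standard_normal(n)
print(detect(noise), detect(signal))    # expect: False, True
```

Note that the threshold depends only on the crossing statistics of zero-median noise, not on the noise PDF or its parameters, which is the intuition behind the robustness claims in the abstract.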
Abstract:
We consider carrier frequency offset (CFO) estimation in the context of multiple-input multiple-output (MIMO) orthogonal frequency-division multiplexing (OFDM) systems over noisy frequency-selective wireless channels in both single- and multiuser scenarios. We conceive a new approach to parameter estimation by discretizing the continuous-valued CFO parameter into a discrete set of bins and then invoking detection theory, analogous to the minimum-bit-error-ratio optimization framework for detecting the finite-alphabet received signal. Using this approach, we propose a novel CFO estimation method and study its performance using both analytical results and Monte Carlo simulations. We obtain expressions for the variance of the CFO estimation error and the resultant BER degradation in the single-user scenario. Our simulations demonstrate that the overall BER performance of a MIMO-OFDM system using the proposed method is substantially improved for all the modulation schemes considered, albeit at an increased complexity.
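The bin-and-detect idea can be illustrated in a few lines. The sketch below (single user, flat channel, a known unit-modulus pilot, and a 201-bin grid; all illustrative simplifications, not the paper's setup) derotates the received block by each candidate CFO bin and "detects" the bin that best matches the pilot:

```python
import numpy as np

rng = np.random.default_rng(4)

N = 64                                            # block length (illustrative)
pilot = np.exp(1j * 2 * np.pi * rng.random(N))    # known unit-modulus pilot

def rx(eps, snr_db):
    # Pilot rotated by normalized CFO eps (fraction of the subcarrier
    # spacing) in AWGN; flat channel assumed for brevity.
    n = np.arange(N)
    noise = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
    return pilot * np.exp(1j * 2 * np.pi * eps * n / N) + 10 ** (-snr_db / 20) * noise

def cfo_estimate(y, bins):
    # Discretize the continuous CFO into candidate bins and detect the
    # bin whose derotated signal best correlates with the pilot -- the
    # detection-theoretic view described in the abstract.
    n = np.arange(N)
    scores = [np.abs(np.vdot(pilot, y * np.exp(-1j * 2 * np.pi * b * n / N)))
              for b in bins]
    return bins[int(np.argmax(scores))]

bins = np.linspace(-0.5, 0.5, 201)                # bin width 0.005 (assumed)
y = rx(eps=0.123, snr_db=10.0)
print(cfo_estimate(y, bins))                      # close to 0.123
```

The residual estimation error is bounded by half the bin width, which is the source of the variance and BER-degradation trade-off the paper analyzes against complexity.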
Abstract:
The study introduces two new alternatives for global response sensitivity analysis based on the application of the L2-norm and Hellinger's metric for measuring the distance between two probabilistic models. Both procedures are shown to be capable of treating dependent non-Gaussian random variable models for the input variables. The sensitivity indices obtained from the L2-norm involve second-order moments of the response and, when applied to the case of an independent and identically distributed sequence of input random variables, are shown to be related to the classical Sobol' response sensitivity indices. The analysis based on Hellinger's metric addresses variability across the entire range, or segments, of the response probability density function. The measure is shown to be a conceptually more satisfying alternative to the Kullback-Leibler divergence based analysis reported in the existing literature. Other issues addressed in the study cover Monte Carlo simulation based methods for computing the sensitivity indices and sensitivity analysis with respect to grouped variables. Illustrative examples consist of studies on global sensitivity analysis of natural frequencies of a random multi-degree-of-freedom system, the response of a nonlinear frame, and the safety margin associated with a nonlinear performance function.
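As a rough illustration of a Hellinger-based index (a generic recipe, not the paper's exact estimator), one can histogram the unconditional response density, pin one input at sampled values, and average the squared Hellinger distance between the conditional and unconditional densities:

```python
import numpy as np

rng = np.random.default_rng(5)

def hellinger_sq(p, q):
    # Squared Hellinger distance between two discretized densities on a
    # shared grid: H^2 = 1 - sum(sqrt(p * q)).
    return 1.0 - np.sum(np.sqrt(p * q))

def sensitivity_index(model, n_inputs, i, n_mc=20_000, n_cond=50, bins=40):
    # Average H^2 between the unconditional response density and densities
    # with input i pinned at sampled values (standard-normal inputs assumed).
    x = rng.standard_normal((n_mc, n_inputs))
    edges = np.histogram_bin_edges(model(x), bins=bins)
    p, _ = np.histogram(model(x), bins=edges)
    p = p / p.sum()
    h2 = 0.0
    for xi in rng.standard_normal(n_cond):        # outer loop over X_i values
        xc = rng.standard_normal((n_mc // 10, n_inputs))
        xc[:, i] = xi                             # condition on X_i = xi
        q, _ = np.histogram(model(xc), bins=edges)
        q = q / max(q.sum(), 1)                   # mass outside the grid is dropped
        h2 += hellinger_sq(p, q)
    return h2 / n_cond

model = lambda x: x[:, 0] + 0.1 * x[:, 1]         # toy response: X0 dominates
print(sensitivity_index(model, 2, 0), sensitivity_index(model, 2, 1))
```

An influential input shifts the conditional density far from the unconditional one, giving a large index, while an unimportant input leaves the two densities nearly identical.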
Abstract:
The calculation of the first passage time (and, moreover, of its probability density in time) has so far been generally viewed as an ill-posed problem in the domain of quantum mechanics. The reason, in summary, is that quantum probabilities in general do not satisfy the Kolmogorov sum rule: the probabilities for Feynman paths entering and not entering a given region of space-time do not in general add up to unity, owing largely to the interference of alternative paths. In the present work, it is pointed out that a special case exists (within the quantum framework) in which, by design, there is one and only one available path (i.e., doorway) to mediate the (first) passage, with no alternative path to interfere. Further, it is identified that a popular family of quantum systems, namely the 1d tight binding Hamiltonian systems, falls into this special category. For these model quantum systems, the first passage time distributions are obtained analytically by suitably applying a method originally devised for classical (stochastic) mechanics (by Schroedinger in 1915). This result is interesting especially given that tight binding models are extensively used to describe everyday phenomena in condensed matter physics.
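For reference, the classical renewal identity on which Schroedinger's 1915 method rests can be written as follows (notation chosen here for illustration). The point of the abstract is that the one-and-only-one-path property of the 1d tight binding systems is what makes an analogue of this relation legitimate in the quantum setting.

```latex
% Renewal relation underlying Schroedinger's (1915) classical method:
% P_{0\to a}(t)  = probability of being at site a at time t, starting from 0,
% F_{0\to a}(\tau) = first-passage time density from 0 to a,
% \tilde{\cdot}(s) = Laplace transform in time.
P_{0\to a}(t) = \int_0^t F_{0\to a}(\tau)\, P_{a\to a}(t-\tau)\, d\tau
\quad\Longrightarrow\quad
\tilde F_{0\to a}(s) = \frac{\tilde P_{0\to a}(s)}{\tilde P_{a\to a}(s)}
```

The decomposition says that every route to site a passes through a first arrival at a; interference between alternative routes is precisely what would break this bookkeeping in a generic quantum system.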
Abstract:
Northeast India and its adjoining areas are characterized by very high seismic activity. According to the Indian seismic code, the region falls under seismic zone V, which represents the highest seismic-hazard level in the country. This region has experienced a number of great earthquakes, such as the Assam (1950) and Shillong (1897) earthquakes, that caused huge devastation across the entire northeast and adjacent areas through flooding, landslides, liquefaction, and damage to roads and buildings. In this study, an attempt has been made to find the probability of occurrence of a major earthquake (M_w > 6) in this region using an updated earthquake catalog collected from different sources. Thereafter, dividing the catalog into six different seismic regions based on different tectonic features and seismogenic factors, the probability of occurrence was estimated using three models: the lognormal, Weibull, and gamma distributions. We calculated the logarithmic likelihood (ln L) for all six regions, and for the entire northeast, under all three stochastic models; a higher value of ln L indicates a better-fitting model. The results show that different models suit different seismic zones, but the majority follow the lognormal distribution, which is better for forecasting magnitude size. According to the results, the Weibull model shows the highest conditional probabilities among the three models for small as well as large elapsed times T and time intervals t, whereas the lognormal model shows the lowest and the gamma model intermediate probabilities. Only for elapsed time T = 0 does the lognormal model show the highest conditional probabilities among the three models at smaller time intervals (t = 3-15 yr); the opposite is observed at larger time intervals (t = 15-25 yr), where the Weibull model shows the highest probabilities. Based on this study, the Indo-Burma Range and Eastern Himalaya show a high probability of occurrence in the 5 yr period 2012-2017, with >90% probability.
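The model-comparison step translates directly into code. The sketch below fits the three candidate distributions to synthetic inter-event times (a stand-in for the study's catalog, which is not reproduced here), compares their log-likelihoods ln L, and evaluates the conditional probability of an event within t years given T years of quiescence under the best-fitting model:

```python
import numpy as np
from scipy import stats

# Synthetic inter-event times (years) between major events -- an
# illustrative stand-in for the regional catalog used in the study.
dt = stats.lognorm.rvs(s=0.7, scale=8.0, size=40, random_state=42)

models = {
    "lognormal": stats.lognorm,
    "weibull":   stats.weibull_min,
    "gamma":     stats.gamma,
}

fits, lnL = {}, {}
for name, dist in models.items():
    params = dist.fit(dt, floc=0)                 # location fixed at zero
    fits[name] = params
    lnL[name] = np.sum(dist.logpdf(dt, *params))  # higher ln L = better fit
print({k: round(v, 2) for k, v in lnL.items()})

def conditional_prob(dist, params, T, t):
    # P(event within (T, T+t] | no event for the first T years)
    cdf = lambda x: dist.cdf(x, *params)
    return (cdf(T + t) - cdf(T)) / (1.0 - cdf(T))

best = max(lnL, key=lnL.get)
print(best, conditional_prob(models[best], fits[best], T=10.0, t=5.0))
```

The elapsed-time dependence reported in the abstract (Weibull highest for T > 0, lognormal highest only at T = 0 and small t) comes from exactly this conditional-probability expression evaluated over a grid of T and t.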