885 results for Probability Theory and Statistics
Abstract:
So far, low-probability differentials for the key schedule of block ciphers have been used as a straightforward proof of security against related-key differential analysis. To achieve resistance, it is believed that for a cipher with a k-bit key it suffices for the upper bound on the probability to be 2^(-k). Surprisingly, we show that this reasonable assumption is incorrect, and that the probability should be (much) lower than 2^(-k). Our counterexample is a related-key differential analysis of the well-established block cipher CLEFIA-128. We show that although the key schedule of CLEFIA-128 prevents differentials with a probability higher than 2^(-128), the linear part of the key schedule that produces the round keys, together with the Feistel structure of the cipher, makes it possible to exploit particularly chosen differentials with a probability as low as 2^(-128). CLEFIA-128 has 2^14 such differentials, which translate into 2^14 pairs of weak keys. The probability of each differential is too low, but the weak keys have a special structure which, with a divide-and-conquer approach, allows an advantage of 2^7 to be gained over generic analysis. We exploit this advantage to give a membership test for the weak-key class and to provide an analysis of the hashing modes. The proposed analysis has been tested with computer experiments on small-scale variants of CLEFIA-128. Our results do not threaten the practical use of CLEFIA.
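To make the quantity being bounded concrete: a related-key differential probability for a key schedule is the probability, over random keys K, that a fixed key difference propagates to a fixed output difference. Below is a toy sampling sketch in Python; the mixing function, the differences, and the sample count are entirely made up and have nothing to do with CLEFIA:

```python
# Estimate Pr over keys K that f(K ^ dK) == f(K) ^ dOut for a toy 16-bit
# "key schedule" f.  Everything here is illustrative, not CLEFIA.
import random

def toy_key_schedule(k):
    k = (k * 0x9E37) & 0xFFFF          # toy multiply-and-mask mixing
    return (k ^ (k >> 7)) & 0xFFFF     # toy diffusion step

dK, dOut, trials = 0x0040, 0x0800, 1 << 18
hits = 0
for _ in range(trials):
    k = random.getrandbits(16)
    hits += toy_key_schedule(k ^ dK) == toy_key_schedule(k) ^ dOut
print(f"estimated differential probability: {hits / trials:.2e}")
```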
Abstract:
A novel method is proposed to treat the problem of the random resistance of a strictly one-dimensional conductor with static disorder. For the probability distribution of the transfer matrix of the conductor, we suggest the distribution of maximum information entropy, constrained by the following physical requirements: (1) flux conservation, (2) time-reversal invariance, and (3) scaling, with the length of the conductor, of the two lowest cumulants of ζ, where the resistance equals sinh²ζ. The preliminary results discussed in the text are in qualitative agreement with those obtained by sophisticated microscopic theories.
Abstract:
A novel method is proposed to treat the problem of the random resistance of a strictly one-dimensional conductor with static disorder. For the probability distribution of the transfer matrix R of the conductor we propose a distribution of maximum information entropy, constrained by the following physical requirements: (1) flux conservation, (2) time-reversal invariance, and (3) scaling with the length of the conductor of the two lowest cumulants of ω, where R = exp(i ω⃗ · Ĵ). The preliminary results discussed in the text are in qualitative agreement with those obtained by sophisticated microscopic theories.
Abstract:
We study the thermoelectric power under a classically large magnetic field (TPM) in ultrathin films (UFs) and quantum wires (QWs) of non-linear optical materials on the basis of a newly formulated electron dispersion law that takes into account the anisotropies of the effective electron masses and the spin-orbit splitting constants, as well as the presence of crystal-field splitting, within the framework of k.p formalism. The results for quantum-confined III-V compounds form special cases of our generalized analysis. The TPM has also been studied for quantum-confined II-VI materials, stressed materials, bismuth and carbon nanotubes (CNs) on the basis of the respective dispersion relations. Taking quantum-confined CdGeAs2, InAs, InSb, CdS, stressed n-InSb and Bi as examples, it is found that the TPM increases with increasing film thickness and decreasing electron statistics, exhibiting a quantized nature for all types of quantum confinement. The TPM in CNs exhibits an oscillatory dependence on increasing carrier concentration, and the signatures of the entirely different types of quantum systems are evident from the plots. Besides, under certain special conditions, all the results for all the materials simplify to the well-known expression of the TPM for non-degenerate materials having parabolic energy bands, providing a compatibility test.
Abstract:
The objective of this paper is to improve option risk monitoring by examining the information content of implied volatility and by introducing the calculation of a single-sum expected risk exposure similar to Value-at-Risk. The figure is calculated in two steps. First, the value of a portfolio of options is estimated for a number of different market scenarios; second, the information content of the estimated scenarios is summarized into a single-sum risk measure. This involves the use of probability theory and return distributions, which confronts the user with the problem of non-normality in the return distribution of the underlying asset. Here the hyperbolic distribution is used as one alternative for dealing with heavy tails. Results indicate that the information content of implied volatility is useful when predicting future large returns in the underlying asset. Further, the hyperbolic distribution provides a good fit to historical returns, enabling a more accurate definition of statistical intervals and extreme events.
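A minimal Python sketch of the two-step calculation described above: revalue the option book under simulated scenarios, then compress the scenario P&L into one number. The contract parameters, the Black-Scholes repricer, and the Student-t scenario generator (a stand-in for a fitted heavy-tailed law such as the hyperbolic) are our assumptions, not the paper's:

```python
import numpy as np
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call (standard formula)."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

rng = np.random.default_rng(0)
S0, K, T, r, iv = 100.0, 100.0, 0.25, 0.02, 0.20   # hypothetical contract
horizon = 1 / 252                                   # one trading day

# Step 1: revalue the option under simulated market scenarios.  A fitted
# heavy-tailed law would replace the Student-t stand-in used here.
returns = 0.01 * rng.standard_t(df=4, size=100_000)
scenario_values = bs_call(S0 * np.exp(returns), K, T - horizon, r, iv)

# Step 2: summarize the scenario P&L into a single-sum, VaR-like figure.
pnl = scenario_values - bs_call(S0, K, T, r, iv)
var_99 = -np.quantile(pnl, 0.01)
print(f"99% one-day expected risk exposure: {var_99:.3f}")
```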
Abstract:
Active particles contain internal degrees of freedom with the ability to take in and dissipate energy and, in the process, execute systematic movement. Examples include all living organisms and their motile constituents, such as molecular motors. This article reviews recent progress in applying the principles of nonequilibrium statistical mechanics and hydrodynamics to form a systematic theory of the behavior of collections of active particles (active matter) with only minimal regard to microscopic details. A unified view of the many kinds of active matter is presented, encompassing not only living systems but also inanimate analogs. Theory and experiment are discussed side by side.
Abstract:
An attempt is made to study the two-dimensional (2D) effective electron mass (EEM) in quantum wells (QWs), inversion layers (ILs) and NIPI superlattices of Kane-type semiconductors in the presence of strong external photoexcitation, on the basis of newly formulated electron dispersion laws within the framework of k.p formalism. It has been found, taking InAs and InSb as examples, that the EEM in QWs, ILs and superlattices increases with increasing concentration, light intensity and wavelength of the incident light waves, respectively, and that the numerical magnitudes in each case are band-structure dependent. The EEM in ILs is quantum-number dependent, exhibiting quantum jumps for specified values of the surface electric field, while in NIPI superlattices it is a function of the Fermi energy and of the subband index characterizing such 2D structures. The humps in the respective curves appear because of the redistribution of electrons among the quantized energy levels when the quantum number corresponding to the highest occupied level changes from one fixed value to another. Although the EEM varies in various manners with all the variables, as is evident from the curves, the rates of variation depend entirely on the specific dispersion relation of the particular 2D structure. Under certain limiting conditions, all the results derived in this paper transform into the well-known formulas for the EEM and the electron statistics in the absence of external photoexcitation, confirming the compatibility test. The results of this paper find three applications in the field of microstructures.
Abstract:
Wireless sensor networks can often be viewed in terms of a uniform deployment of a large number of nodes in a region of Euclidean space. Following deployment, the nodes self-organize into a mesh topology with a key aspect being self-localization. Having obtained a mesh topology in a dense, homogeneous deployment, a frequently used approximation is to take the hop distance between nodes to be proportional to the Euclidean distance between them. In this work, we analyze this approximation through two complementary analyses. We assume that the mesh topology is a random geometric graph on the nodes; and that some nodes are designated as anchors with known locations. First, we obtain high probability bounds on the Euclidean distances of all nodes that are h hops away from a fixed anchor node. In the second analysis, we provide a heuristic argument that leads to a direct approximation for the density function of the Euclidean distance between two nodes that are separated by a hop distance h. This approximation is shown, through simulation, to very closely match the true density function. Localization algorithms that draw upon the preceding analyses are then proposed and shown to perform better than some of the well-known algorithms present in the literature. Belief-propagation-based message-passing is then used to further enhance the performance of the proposed localization algorithms. To our knowledge, this is the first usage of message-passing for hop-count-based self-localization.
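A small Python sketch of the setting assumed in this abstract: nodes deployed uniformly at random, a random geometric graph as the mesh topology, and hop counts from one anchor compared with true Euclidean distances. The node count, radio range, and anchor choice are illustrative assumptions:

```python
# Compare hop distance with Euclidean distance in a random geometric graph.
import numpy as np
import networkx as nx

n, radius = 500, 0.1                        # hypothetical density and range
G = nx.random_geometric_graph(n, radius, seed=1)
pos = nx.get_node_attributes(G, "pos")

anchor = 0                                   # node with a known location
hops = nx.single_source_shortest_path_length(G, anchor)
for node in list(hops)[:6]:
    d = np.hypot(pos[node][0] - pos[anchor][0],
                 pos[node][1] - pos[anchor][1])
    # h * radius upper-bounds the true distance; proportionality between
    # the two is the approximation the paper analyzes
    print(f"h={hops[node]}  true d={d:.3f}  h*r={hops[node] * radius:.3f}")
```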
Abstract:
In this work, the development of a probabilistic approach to robust control is motivated by structural control applications in civil engineering. Often in civil structural applications, a system's performance is specified in terms of its reliability. In addition, the model and input uncertainty for the system may be described most appropriately using probabilistic or "soft" bounds on the model and input sets. The probabilistic robust control methodology contrasts with existing H∞/μ robust control methodologies that do not use probability information for the model and input uncertainty sets, yielding only the guaranteed (i.e., "worst-case") system performance and no information about the system's probable performance, which would be of interest to civil engineers.
The design objective for the probabilistic robust controller is to maximize the reliability of the uncertain structure/controller system for a probabilistically-described uncertain excitation. The robust performance is computed for a set of possible models by weighting the conditional performance probability for a particular model by the probability of that model, then integrating over the set of possible models. This integration is accomplished efficiently using an asymptotic approximation. The probable performance can be optimized numerically over the class of allowable controllers to find the optimal controller. Also, if structural response data becomes available from a controlled structure, its probable performance can easily be updated using Bayes's Theorem to update the probability distribution over the set of possible models. An updated optimal controller can then be produced, if desired, by following the original procedure. Thus, the probabilistic framework integrates system identification and robust control in a natural manner.
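A rough Python sketch of the integration step just described. The thesis evaluates the integral with an asymptotic approximation; plain Monte Carlo is shown here instead, with a made-up conditional failure model and a made-up prior over a single uncertain model parameter:

```python
# Robust failure probability: integrate P(fail | model) against the model
# probability distribution.  The functional forms below are illustrative.
import numpy as np

rng = np.random.default_rng(2)

def failure_prob_given_model(theta):
    # hypothetical stand-in for the conditional reliability computation
    return 1.0 / (1.0 + np.exp(-(theta - 1.0)))

thetas = rng.normal(loc=1.0, scale=0.3, size=50_000)   # "soft" model bounds
robust_failure_prob = failure_prob_given_model(thetas).mean()
print(f"robust failure probability ~ {robust_failure_prob:.4f}")
```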
The probabilistic robust control methodology is applied to two systems in this thesis. The first is a high-fidelity computer model of a benchmark structural control laboratory experiment. For this application, uncertainty in the input model only is considered. The probabilistic control design minimizes the failure probability of the benchmark system while remaining robust with respect to the input model uncertainty. The performance of an optimal low-order controller compares favorably with that of higher-order controllers for the same benchmark system based on other approaches. The second application is to the Caltech Flexible Structure, which is a lightweight aluminum truss structure actuated by three voice coil actuators. A controller is designed to minimize the failure probability for a nominal model of this system. Furthermore, the method for updating the model-based performance calculation given new response data from the system is illustrated.
Abstract:
It is shown that, when expressing arguments in terms of their logarithms, the Laplace transform of a function is related to the antiderivative of this function by a simple convolution. This allows efficient numerical computation of moment generating functions of positive random variables and their inversion. The application of the method is straightforward, apart from the necessity of implementing it using high-precision arithmetic. In numerical examples the approach is demonstrated to be particularly useful for distributions with heavy tails, such as lognormal, Weibull, or Pareto distributions, which are otherwise difficult to handle. The computational efficiency compared to other methods is demonstrated for an M/G/1 queueing problem.
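The identity can be reconstructed as follows under standard conventions (our sketch, not necessarily the paper's notation). With the antiderivative F(x) = ∫₀ˣ f(t) dt, integration by parts gives Lf(s) = s ∫₀^∞ e^(-sx) F(x) dx; substituting s = e^(-u) and x = e^(v) then yields a convolution in the log-arguments:

```latex
% Laplace transform in logarithmic coordinates (reconstruction):
\mathcal{L}f(e^{-u})
  = \int_{-\infty}^{\infty} e^{-(u-v)}\, e^{-e^{-(u-v)}}\, F(e^{v}) \,\mathrm{d}v
  = (k * G)(u),
\qquad
k(t) = e^{-t - e^{-t}}, \qquad G(v) = F(e^{v}).
```

The fixed kernel k(t) is the standard Gumbel density, so in log-arguments the transform is the antiderivative convolved with one universal kernel, as the abstract states.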
Abstract:
The quick, easy way to master all the statistics you'll ever need. The bad news first: if you want a psychology degree you'll need to know statistics. Now for the good news: Psychology Statistics For Dummies. Featuring jargon-free explanations, step-by-step instructions and dozens of real-life examples, Psychology Statistics For Dummies makes the knotty world of statistics a lot less baffling. Rather than padding the text with concepts and procedures irrelevant to the task, the authors focus only on the statistics psychology students need to know. As an alternative to typical, lead-heavy statistics texts or supplements to assigned course reading, this is one book psychology students won't want to be without.

- Ease into statistics – start out with an introduction to how statistics are used by psychologists, including the types of variables they use and how they measure them
- Get your feet wet – quickly learn the basics of descriptive statistics, such as central tendency and measures of dispersion, along with common ways of graphically depicting information
- Meet your new best friend – learn the ins and outs of SPSS, the most popular statistics software package among psychology students, including how to input, manipulate and analyse data
- Analyse this – get up to speed on statistical analysis core concepts, such as probability and inference, hypothesis testing, distributions, Z-scores and effect sizes
- Correlate that – get the lowdown on common procedures for defining relationships between variables, including linear regressions, associations between categorical data and more
- Analyse by inference – master key methods in inferential statistics, including techniques for analysing independent groups designs and repeated-measures research designs

Open the book and find: ways to describe statistical data; how to use SPSS statistical software; probability theory and statistical inference; descriptive statistics basics; how to test hypotheses; correlations and other relationships between variables; core concepts in statistical analysis for psychology; and how to analyse research designs.

Learn to: use SPSS to analyse data; master statistical methods and procedures using psychology-based explanations and examples; create better reports; and identify key concepts and pass your course.
Abstract:
The Effective Classroom Practice project aimed to identify key factors that contribute to effective teaching in primary and secondary phases of schooling in different socioeconomic contexts. This article addresses the ways in which qualitative and quantitative approaches were combined within an integrated design to provide a comprehensive methodology for the research purposes. Strategies for the study are discussed, followed by the challenges of combining complex statistics with individual stories, particularly in relation to the ongoing iteration between these different data sets, and issues of validity and reliability. The findings shed new light on the meanings and measurement of teachers’ effective classroom practice and the complex nature of, and relationships with, professional life phase, teacher identities, and school context.
Fractional derivatives: probability interpretation and frequency response of rational approximations
Abstract:
The theory of fractional calculus (FC) is a useful mathematical tool in many applied sciences. Nevertheless, only in recent decades have researchers been motivated to adopt FC concepts. There are several reasons for this state of affairs, namely the coexistence of different definitions and interpretations, and the need for approximation methods for the real-time calculation of fractional derivatives (FDs). In the first part, this paper introduces a probabilistic interpretation of the fractional derivative based on the Grünwald-Letnikov definition. In the second part, the calculation of fractional derivatives through Padé fraction approximations is analyzed. It is observed that the probabilistic interpretation and the frequency response of fraction approximations of FDs reveal a clear correlation between the two concepts.
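For concreteness, here is a minimal numerical sketch of the Grünwald-Letnikov definition mentioned above; the step size, the test function, and the recurrence used for the binomial weights are our choices, not the paper's:

```python
import math
import numpy as np

def gl_derivative(f, t, alpha, h=1e-3):
    """Grünwald-Letnikov derivative of order alpha at t > 0 (truncated sum)."""
    n = int(t / h)
    # weights w_k = (-1)^k * binom(alpha, k), via w_k = w_{k-1} * (k-1-alpha)/k
    w = np.empty(n + 1)
    w[0] = 1.0
    for k in range(1, n + 1):
        w[k] = w[k - 1] * (k - 1 - alpha) / k
    return h ** (-alpha) * np.dot(w, f(t - h * np.arange(n + 1)))

# sanity check against the closed form D^{1/2} t = 2*sqrt(t/pi)
t, alpha = 1.0, 0.5
print(gl_derivative(lambda x: x, t, alpha), 2 * math.sqrt(t / math.pi))
```

For 0 < α < 1 the weights w_k are negative for k ≥ 1 and their magnitudes sum to w_0 = 1 in the infinite limit, so up to sign they can be read as a probability distribution over past samples of the function, which is the flavor of interpretation the paper develops.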