965 results for Maximum-entropy probability density
Abstract:
We implement conditional moment closure (CMC) for the simulation of chemical reactions in laminar chaotic flows. The CMC approach predicts the expected concentration of reactive species, conditional upon the concentration of a corresponding nonreactive scalar. Closure is obtained by neglecting the difference between the local concentration of the reactive scalar and its conditional average. We first use a Monte Carlo method to calculate the evolution of the moments of a conserved scalar; we then reconstruct the corresponding probability density function and dissipation rate. Finally, the concentrations of the reactive scalars are determined. The results are compared (and show excellent agreement) with full numerical simulations of the reaction processes in a chaotic laminar flow. This is a preprint of an article published in AIChE Journal, copyright (2007) American Institute of Chemical Engineers: http://www3.interscience.wiley.com/
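As a hedged illustration of the moment-to-PDF reconstruction step described above (a presumed-PDF surrogate, not necessarily the paper's exact procedure), a beta density for a conserved scalar on [0, 1] can be recovered from its first two moments; `beta_pdf_from_moments` and the moment values are illustrative:

```python
# Sketch: presumed beta PDF of a conserved scalar reconstructed from its
# mean and variance (method of moments); requires var < mean * (1 - mean).
import numpy as np
from scipy.stats import beta

def beta_pdf_from_moments(mean, var, z):
    """Beta PDF on [0, 1] whose mean and variance match the given moments."""
    factor = mean * (1.0 - mean) / var - 1.0
    a, b = mean * factor, (1.0 - mean) * factor
    return beta.pdf(z, a, b)

z = np.linspace(0.0, 1.0, 201)
pdf = beta_pdf_from_moments(mean=0.3, var=0.02, z=z)
print(np.sum(pdf) * (z[1] - z[0]))  # ~1.0: the reconstructed density is normalised
```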
Abstract:
We present a study of the detection and characterization of volcano-tectonic and long-period seismic events in seismic records generated by the Cotopaxi volcano. The proposed sequential detection structure maximizes the probability of detecting an event present in a seismic record while minimizing missed detections. Detection is performed in the time domain, in near real time, while maintaining a constant false-alarm rate; the spectral content of the detected events is then studied using classical spectral estimators such as the periodogram and parametric estimators such as Burg's maximum-entropy method, thereby categorizing the detected events as volcano-tectonic, long-period, or other (such as lightning) when they do not possess characteristics belonging to the other two types.
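For context, Burg's maximum-entropy method mentioned above fits an autoregressive (AR) model by minimising forward and backward prediction errors and reads the power spectrum off the fitted model. The sketch below is a generic textbook implementation, not the study's detection pipeline; `burg_ar`, `burg_psd` and the toy signal are assumptions for illustration:

```python
import numpy as np

def burg_ar(x, order):
    """Burg (maximum-entropy) estimate of AR coefficients a and noise power E
    for the model x[n] + a[0]*x[n-1] + ... + a[order-1]*x[n-order] = e[n]."""
    x = np.asarray(x, dtype=float)
    f, b = x[1:].copy(), x[:-1].copy()            # forward / backward prediction errors
    a = np.zeros(0)
    E = np.dot(x, x) / len(x)                     # zeroth-order error power
    for _ in range(order):
        k = -2.0 * np.dot(f, b) / (np.dot(f, f) + np.dot(b, b))  # reflection coefficient
        a = np.concatenate((a + k * a[::-1], [k]))               # Levinson-Durbin update
        E *= 1.0 - k * k
        f, b = (f + k * b)[1:], (b + k * f)[:-1]
    return a, E

def burg_psd(a, E, freqs, fs=1.0):
    """Maximum-entropy (AR) power spectrum implied by the fitted model."""
    z = np.exp(-2j * np.pi * np.outer(freqs, np.arange(1, len(a) + 1)) / fs)
    return E / np.abs(1.0 + z @ a) ** 2

# Toy usage: locate the spectral peak of a noisy 5 Hz sinusoid sampled at 100 Hz
fs = 100.0
t = np.arange(0, 10, 1 / fs)
x = np.sin(2 * np.pi * 5.0 * t) + 0.5 * np.random.default_rng(0).standard_normal(t.size)
a, E = burg_ar(x, order=8)
freqs = np.linspace(0.0, fs / 2, 256)
print(freqs[np.argmax(burg_psd(a, E, freqs, fs))])  # ~5 Hz
```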
Abstract:
There are different applications in engineering that require computing improper integrals of the first kind (integrals defined on an unbounded domain), such as: the work required to move an object from the surface of the Earth to infinity (kinetic energy), the electric potential created by a charged sphere, the probability density function or the cumulative distribution function in probability theory, the values of the Gamma function (which is useful to compute the Beta function used to compute trigonometric integrals), and Laplace and Fourier transforms (very useful, for example, in differential equations).
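A minimal sketch of how such improper integrals can be evaluated numerically with scipy.integrate.quad over an unbounded domain; the two integrands are illustrative textbook examples, not taken from the cited work:

```python
import numpy as np
from scipy.integrate import quad

# Gamma(5) = integral_0^inf t^4 e^(-t) dt = 4! = 24
gamma_five, _ = quad(lambda t: t ** 4 * np.exp(-t), 0, np.inf)

# Total mass of the standard normal probability density function = 1
normal_mass, _ = quad(lambda x: np.exp(-x ** 2 / 2) / np.sqrt(2 * np.pi), -np.inf, np.inf)

print(gamma_five)   # ~24.0
print(normal_mass)  # ~1.0
```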
Abstract:
We study the problem of detecting sentences describing adverse drug reactions (ADRs) and frame the problem as binary classification. We investigate different neural network (NN) architectures for ADR classification. In particular, we propose two new neural network models: a Convolutional Recurrent Neural Network (CRNN), obtained by concatenating convolutional neural networks with recurrent neural networks, and a Convolutional Neural Network with Attention (CNNA), obtained by adding attention weights to convolutional neural networks. We evaluate the various NN architectures on a Twitter dataset containing informal language and an Adverse Drug Effects (ADE) dataset constructed by sampling from MEDLINE case reports. Experimental results show that all the NN architectures considerably outperform traditional maximum entropy classifiers trained on n-grams with different weighting strategies on both datasets. On the Twitter dataset, all the NN architectures perform similarly, but on the ADE dataset the plain CNN performs better than the more complex CNN variants. Nevertheless, CNNA allows the visualisation of the attention weights of words when making classification decisions and is hence more appropriate for the extraction of word subsequences describing ADRs.
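For reference, the kind of maximum-entropy n-gram baseline the neural models are compared against can be sketched as a logistic regression over n-gram features (a maximum-entropy classifier); the sentences, labels and tf-idf weighting below are placeholders, not the paper's experimental setup:

```python
# Hedged sketch of a maximum-entropy (logistic regression) n-gram baseline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

sentences = ["felt dizzy after taking the drug", "the weather is nice today"]
labels = [1, 0]  # 1 = sentence describes an adverse drug reaction

maxent = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # unigram + bigram features, tf-idf weighted
    LogisticRegression(max_iter=1000),    # multinomial logit == maximum-entropy classifier
)
maxent.fit(sentences, labels)
print(maxent.predict(["I got a severe rash from this medication"]))
```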
Abstract:
We describe the homozygous TGM1 variant c.320-2A>G in two sisters with autosomal recessive congenital ichthyosis. Cloning of the transcripts generated by this variant allowed three alternative splicing molecular mechanisms to be identified.
Abstract:
The scalar Schrödinger equation models the probability density distribution for a particle to be found at a point x, given a potential V(x) that forms a well with respect to a fixed energy level E_0. Formally, two real turning (inversion) points a, b exist such that V(a) = V(b) = E_0, V(x) < E_0 in (a, b) and V(x) > E_0 for x < a and x > b. Following the work of D. Yafaev and performing a WKB approximation, we obtain solutions defined on specific intervals. The aim of the first part of the thesis is to find a condition on E, belonging to a neighbourhood of E_0, such that E is an eigenvalue of the Schrödinger operator, obtaining in this way globally defined, linearly dependent solutions in L2. In quantum mechanics this condition is known as the Bohr-Sommerfeld quantization. In the second part we define a Schrödinger operator associated with two potential wells and study the quantization conditions on E required to have a global solution in L2 × L2, according to the mutual position of the potentials. In particular, the wells can be disjoint, can intersect, can be included one in the other, or can intersect in a single point. For these cases we refer to the works of A. Martinez, S. Fujiié, T. Watanabe and S. Ashida.
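For the single-well case, the Bohr-Sommerfeld quantization condition referred to above reads, in its standard semiclassical form (the thesis may use a different normalisation),

\[
\int_{a(E)}^{b(E)} \sqrt{2m\,\bigl(E - V(x)\bigr)}\;dx \;=\; \Bigl(n + \tfrac{1}{2}\Bigr)\pi\hbar,
\qquad n = 0, 1, 2, \ldots,
\]

where a(E) and b(E) are the turning points at energy E.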
Abstract:
In recent decades, increased interest has been evident in research on multi-scale hierarchical modelling in the field of mechanics, and also in the field of wood products and timber engineering. One of the main motivations for hierarchical modelling is to understand how properties, composition and structure at lower scale levels may influence, and be used to predict, the material properties at the macroscopic and structural engineering scale. This chapter presents the applicability of statistical and probabilistic methods, such as the maximum likelihood method and Bayesian methods, to the representation of the mechanical properties of timber and their inference accounting for prior information obtained at different scales of importance. These methods allow distinct reference properties of timber, such as density, bending stiffness and strength, to be analysed, and information obtained through different non-destructive, semi-destructive or destructive tests to be considered hierarchically. The basis and fundamentals of the methods are described, and recommendations and limitations are discussed. The methods may be used in several contexts; however, they require expert knowledge to assess the correct statistical fit and to define the correlation structure between properties.
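As a hedged sketch of the maximum likelihood step described above, a lognormal model can be fitted to measured timber densities; the sample values and the choice of a lognormal model are illustrative assumptions, not data or recommendations from the chapter:

```python
# Hypothetical densities in kg/m^3; ML fit of a lognormal model with scipy.
import numpy as np
from scipy import stats

density = np.array([415.0, 430.0, 452.0, 461.0, 470.0, 488.0, 495.0, 512.0, 530.0, 555.0])
shape, loc, scale = stats.lognorm.fit(density, floc=0.0)  # ML estimates, location fixed at 0
print(f"median ~ {scale:.1f} kg/m^3, log-sd ~ {shape:.3f}")
```

A Bayesian variant would place a prior on these parameters and update it with the same likelihood as new test information becomes available.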
Abstract:
The present study concerns the characterization of probability distributions using the residual entropy function. The concept of entropy is extensively used in the literature as a quantitative measure of the uncertainty associated with a random phenomenon. The commonly used lifetime models in reliability theory are the exponential, Pareto, beta, Weibull and gamma distributions. Several characterization theorems are obtained for these models using reliability concepts such as the failure rate, mean residual life function, vitality function and variance residual life function. Most of the work on the characterization of distributions in the reliability context centers around the failure rate or the residual life function. An important aspect of interest in the study of entropy is that of locating distributions for which Shannon's entropy is maximum subject to certain restrictions on the underlying random variable. We introduce the geometric vitality function and examine its properties; it is established that the geometric vitality function determines the distribution uniquely. The problem of averaging the residual entropy function is examined, and truncated versions of entropies of higher order are defined. In this study it is established that the residual entropy function determines the distribution uniquely and that its constancy is characteristic of the geometric distribution.
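For concreteness, the residual entropy function referred to here is commonly defined, for a lifetime variable X with density f and survival function F̄, as

\[
H(X;t) \;=\; -\int_t^{\infty} \frac{f(x)}{\bar F(t)}\,\log\frac{f(x)}{\bar F(t)}\,dx,
\qquad \bar F(t) = \Pr(X > t),
\]

and the geometric vitality function G(t) is usually specified through \(\log G(t) = \mathbb{E}[\log X \mid X > t]\); the exact conventions used in the study may differ.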
Abstract:
The problem addressed concerns the determination of the average number of successive attempts of guessing a word of a certain length consisting of letters with given probabilities of occurrence. Both first- and second-order approximations to a natural language are considered. The guessing strategy used is guessing words in decreasing order of probability. When word and alphabet sizes are large, approximations are necessary in order to estimate the number of guesses. Several kinds of approximations are discussed, demonstrating moderate requirements regarding both memory and central processing unit (CPU) time. When considering realistic sizes of alphabets and words (100), the number of guesses can be estimated within minutes with reasonable accuracy (a few percent) and may therefore constitute an alternative to, e.g., various entropy expressions. For many probability distributions, the density of the logarithm of probability products is close to a normal distribution. For those cases, it is possible to derive an analytical expression for the average number of guesses. The proportion of guesses needed on average compared to the total number decreases almost exponentially with the word length. The leading term in an asymptotic expansion can be used to estimate the number of guesses for large word lengths. Comparisons with analytical lower bounds and entropy expressions are also provided.
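A small sketch of the quantity being approximated, for a toy first-order (i.i.d. letter) model; the alphabet probabilities are illustrative, and realistic alphabet and word sizes make this brute-force enumeration infeasible, which is what motivates the approximations above:

```python
# Expected number of guesses when words are tried in decreasing order of probability.
from itertools import product
import numpy as np

letter_probs = [0.5, 0.3, 0.2]  # toy 3-letter alphabet, letters drawn i.i.d.
word_length = 4

word_probs = np.sort([np.prod(p) for p in product(letter_probs, repeat=word_length)])[::-1]
expected_guesses = np.sum(np.arange(1, word_probs.size + 1) * word_probs)
print(expected_guesses, "guesses on average out of", word_probs.size, "possible words")
```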
Abstract:
The structure of probability currents is studied for the dynamical network obtained after consecutive contractions of two-state, nonequilibrium lattice systems. This procedure allows us to investigate the transition rates between configurations on small clusters and highlights some relevant effects of lattice symmetries on the elementary transitions that are responsible for entropy production. A method is suggested to estimate the entropy production at different levels of approximation (cluster sizes), as demonstrated for the two-dimensional contact process with mutation.
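For reference, the entropy production rate being estimated is commonly written, for a Markovian network with transition rates w_{ij} from configuration j to i and stationary probabilities p_j, as

\[
\sigma \;=\; \frac{1}{2}\sum_{i \neq j} \bigl(w_{ij}p_j - w_{ji}p_i\bigr)\,\ln\frac{w_{ij}p_j}{w_{ji}p_i};
\]

the cluster-level approximations discussed above can be read as evaluating such an expression on contracted configuration spaces, though the paper's exact estimator may differ.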
Abstract:
We consider the statistical properties of the local density of states of a one-dimensional Dirac equation in the presence of various types of disorder with Gaussian white-noise distribution. It is shown how either the replica trick or supersymmetry can be used to calculate exactly all the moments of the local density of states. Careful attention is paid to how the results change if the local density of states is averaged over atomic length scales. For both the replica trick and supersymmetry the problem is reduced to finding the ground state of a zero-dimensional Hamiltonian which is written solely in terms of a pair of coupled spins which are elements of u(1, 1). This ground state is explicitly found for the particular case of the Dirac equation corresponding to an infinite metallic quantum wire with a single conduction channel. The calculated moments of the local density of states agree with those found previously by Al'tshuler and Prigodin [Sov. Phys. JETP 68 (1989) 198] using a technique based on recursion relations for Feynman diagrams. (C) 2001 Elsevier Science B.V. All rights reserved.
Abstract:
The extent to which density-dependent processes regulate natural populations is the subject of an ongoing debate. We contribute evidence to this debate showing that density-dependent processes influence the population dynamics of the ectoparasite Aponomma hydrosauri (Acari: Ixodidae), a tick species that infests reptiles in Australia. The first piece of evidence comes from an unusually long-term dataset on the distribution of ticks among individual hosts. If density-dependent processes are influencing either host mortality or vital rates of the parasite population, and those distributions can be approximated with negative binomial distributions, then general host-parasite models predict that the aggregation coefficient of the parasite distribution will increase with the average intensity of infections. We fit negative binomial distributions to the frequency distributions of ticks on hosts, and find that the estimated aggregation coefficient k increases with increasing average tick density. This pattern indirectly implies that one or more vital rates of the tick population must be changing with increasing tick density, because mortality rates of the tick's main host, the sleepy lizard, Tiliqua rugosa, are unaffected by changes in tick burdens. Our second piece of evidence is a re-analysis of experimental data on the attachment success of individual ticks to lizard hosts using generalized linear modelling. The probability of successful engorgement decreases with increasing numbers of ticks attached to a host. This is direct evidence of a density-dependent process that could lead to an increase in the aggregation coefficient of tick distributions described earlier. The population-scale increase in the aggregation coefficient is indirect evidence of a density-dependent process or processes sufficiently strong to produce a population-wide pattern, and thus also likely to influence population regulation. The direct observation of a density-dependent process is evidence of at least part of the responsible mechanism.
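As a hedged sketch of the distribution-fitting step, the negative binomial aggregation coefficient k can be estimated by maximum likelihood from a sample of per-host tick counts; the counts below are illustrative placeholders, not the study's data:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import nbinom

ticks_per_host = np.array([0, 0, 0, 1, 1, 2, 2, 3, 5, 8, 12, 20])
mean = ticks_per_host.mean()

def neg_loglik(k):
    # Parameterisation with variance = mean + mean**2 / k, so small k = strong aggregation.
    p = k / (k + mean)
    return -nbinom.logpmf(ticks_per_host, k, p).sum()

k_hat = minimize_scalar(neg_loglik, bounds=(1e-3, 50.0), method="bounded").x
print(f"mean intensity = {mean:.2f}, aggregation coefficient k ~ {k_hat:.2f}")
```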
Abstract:
We design powerful low-density parity-check (LDPC) codes with iterative decoding for the block-fading channel. We first study the case of maximum-likelihood decoding, and show that the design criterion is rather straightforward. Since optimal constructions for maximum-likelihood decoding do not perform well under iterative decoding, we introduce a new family of full-diversity LDPC codes that exhibit near-outage-limit performance under iterative decoding for all block lengths. This family competes favorably with multiplexed parallel turbo codes for nonergodic channels.
Abstract:
The development and tests of an iterative reconstruction algorithm for emission tomography based on Bayesian statistical concepts are described. The algorithm uses the entropy of the generated image as a prior distribution, can be accelerated by the choice of an exponent, and converges uniformly to feasible images by the choice of one adjustable parameter. A feasible image has been defined as one that is consistent with the initial data (i.e. it is an image that, if truly a source of radiation in a patient, could have generated the initial data by the Poisson process that governs radioactive disintegration). The fundamental ideas of Bayesian reconstruction are discussed, along with the use of an entropy prior with an adjustable contrast parameter, the use of the likelihood with data-increment parameters as the conditional probability, and the development of the new fast maximum a posteriori with entropy (FMAPE) algorithm by the successive substitution method. It is shown that, in the maximum likelihood estimator (MLE) and FMAPE algorithms, the only correct choice of initial image for the iterative procedure in the absence of a priori knowledge about the image configuration is a uniform field.
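A minimal sketch of the maximum-likelihood EM (MLEM) update that the FMAPE algorithm builds on; the entropy prior and the acceleration exponent are not shown, and the toy system matrix and counts below are placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((40, 16))         # toy system matrix: detector bins x image pixels
x_true = rng.random(16) * 10.0
y = rng.poisson(A @ x_true)      # Poisson-distributed emission data

x = np.ones(16)                  # uniform initial image, as recommended above
sensitivity = A.sum(axis=0)      # column sums of the system matrix
for _ in range(50):
    ratio = y / np.clip(A @ x, 1e-12, None)
    x *= (A.T @ ratio) / sensitivity   # multiplicative MLEM update

print(np.round(x, 2))            # approaches an image consistent with the data y
```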