935 results for Relative entropy
Abstract:
In this work we construct reliable a posteriori estimates for some spatially semidiscrete discontinuous Galerkin schemes applied to nonlinear systems of hyperbolic conservation laws. We make use of appropriate reconstructions of the discrete solution together with the relative entropy stability framework, which leads to error control in the case of smooth solutions. The methodology is quite general and allows for a posteriori control of discontinuous Galerkin schemes with the standard flux choices that appear in the approximation of conservation laws. In addition to the analysis, we conduct numerical benchmarking to test the robustness of the resulting estimator.
Abstract:
Complex networks obtained from real-world data are often characterized by incompleteness and noise, consequences of imperfect sampling as well as artifacts in the acquisition process. Because the characterization, analysis and modeling of complex systems underlain by complex networks are critically affected by the quality and completeness of the respective initial structures, it becomes imperative to devise methodologies for identifying and quantifying the effects of sampling on the network structure. One way to evaluate these effects is through an analysis of the sensitivity of complex network measurements to perturbations in the topology of the network. In this paper, measurement sensitivity is quantified in terms of the relative entropy of the respective distributions. Three particularly important kinds of progressive perturbations to the network are considered, namely edge suppression, addition and rewiring. The measurements allowing the best balance of stability (smaller sensitivity to perturbations) and discriminability (separation between different network topologies) are identified with respect to each type of perturbation. The analysis includes eight different measurements applied to six different complex network models and three real-world networks. This approach allows one to choose the appropriate measurements in order to obtain accurate results for networks where sampling bias cannot be avoided, a very frequent situation in research on complex networks.
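The core quantity in the abstract above, the relative entropy between a measurement's distribution before and after a topological perturbation, can be sketched with the standard library alone. The graph model, the 10% edge-suppression level, and all names below are illustrative assumptions, not the paper's actual setup:

```python
import math
import random
from collections import Counter

def kl_divergence(p, q, eps=1e-12):
    """Relative entropy D(p || q) of two discrete distributions given as
    dicts mapping outcomes to probabilities (eps smooths empty bins)."""
    support = set(p) | set(q)
    return sum(p.get(k, 0.0) * math.log((p.get(k, 0.0) + eps) / (q.get(k, 0.0) + eps))
               for k in support if p.get(k, 0.0) > 0.0)

def degree_distribution(edges, n):
    """Empirical degree distribution of a graph on n nodes."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    counts = Counter(deg.get(i, 0) for i in range(n))
    return {k: c / n for k, c in counts.items()}

random.seed(0)
n = 200
# Erdos-Renyi-style random graph with edge probability 0.05
edges = [(i, j) for i in range(n) for j in range(i + 1, n) if random.random() < 0.05]

# Progressive edge suppression: keep a random 90% of the edges
kept = random.sample(edges, int(0.9 * len(edges)))

p = degree_distribution(edges, n)
q = degree_distribution(kept, n)
print(f"D(original || perturbed) = {kl_divergence(p, q):.4f}")
```

A measurement whose distribution yields a small divergence under such perturbations would count as stable in the sense discussed above.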
Abstract:
This dissertation proposes a supervised algorithm for the classification of remote sensing images, composed of three stages: cloud removal or smoothing, segmentation, and classification. The cloud removal method uses homomorphic filtering to treat obstructions caused by thin clouds and the Inpainting method to remove or attenuate shadows and dense clouds. For the segmentation and classification stages, a method based on the AC energy of the Discrete Cosine Transform (DCT) coefficients is proposed. The classification mode adopted is supervised. To evaluate the algorithm, a bank of 14 images captured by various sensors was used, 12 of which contain some type of obstruction. The cloud and shadow removal or smoothing stage is evaluated using the peak signal-to-noise ratio (PSNR) and the Kappa coefficient. At this stage, several high-pass filters were compared in order to select the most efficient one. Image segmentation is evaluated by the edge border coincidence (EBC) method, and classification is evaluated by relative entropy and by the mean squared error (MSE). As important as the metrics, the resulting images are presented so as to allow subjective evaluation by visual comparison. The results show the efficiency of the proposed algorithm, especially when compared with the Spring software distributed by the Instituto Nacional de Pesquisas Espaciais (INPE).
Abstract:
We demonstrate that for every two-qubit state there is an X-counterpart, i.e., a corresponding two-qubit X state with the same spectrum and entanglement, as measured by concurrence, negativity or relative entropy of entanglement. By parametrizing the set of two-qubit X states and a family of unitary transformations that preserve the sparse structure of a two-qubit X-state density matrix, we obtain the parametric form of a unitary transformation that converts arbitrary two-qubit states into their X-counterparts. Moreover, we provide a semi-analytic prescription on how to set the parameters of this unitary transformation in order to preserve concurrence or negativity. We also explicitly construct a set of X-state density matrices, parametrized by their purity and concurrence, whose elements are in one-to-one correspondence with the points of the concurrence versus purity (CP) diagram for generic two-qubit states.
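For X states the concurrence mentioned above has a well-known closed form in terms of the four diagonal elements and the two off-diagonal magnitudes. A minimal sketch (function name and test state are illustrative, not the paper's parametrization):

```python
import math

def x_state_concurrence(p11, p22, p33, p44, r14, r23):
    """Closed-form concurrence of a two-qubit X state with diagonal
    (p11, p22, p33, p44) and off-diagonal magnitudes |rho14| = r14,
    |rho23| = r23."""
    return 2.0 * max(0.0,
                     r14 - math.sqrt(p22 * p33),
                     r23 - math.sqrt(p11 * p44))

# Bell state |Phi+> = (|00> + |11>)/sqrt(2): rho11 = rho44 = |rho14| = 1/2,
# a maximally entangled X state.
print(x_state_concurrence(0.5, 0.0, 0.0, 0.5, 0.5, 0.0))  # -> 1.0
```

The maximally mixed state (all diagonal entries 1/4, no coherences) correctly gives concurrence 0 under the same formula.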
Abstract:
Statistical methods have been widely employed to assess the capabilities of credit scoring classification models in order to reduce the risk of wrong decisions when granting credit facilities to clients. The predictive quality of a classification model can be evaluated based on measures such as sensitivity, specificity, predictive values, accuracy, correlation coefficients and information-theoretic measures such as relative entropy and mutual information. In this paper we analyze the performance of a naive logistic regression model (Hosmer & Lemeshow, 1989) and a logistic regression with state-dependent sample selection model (Cramer, 2004) applied to simulated data. As a case study, the methodology is also illustrated on a data set extracted from a Brazilian bank portfolio. Our simulation results revealed no statistically significant difference in predictive capacity between the naive logistic regression models and the logistic regression with state-dependent sample selection models. However, there is a strong difference between the distributions of the estimated default probabilities produced by these two statistical modeling techniques, with the naive logistic regression models always underestimating such probabilities, particularly in the presence of balanced samples.
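The first group of measures listed in the abstract follows directly from a 2x2 confusion matrix. A minimal sketch; the confusion-matrix counts are purely hypothetical:

```python
def classification_measures(tp, fp, fn, tn):
    """Basic predictive-quality measures from a 2x2 confusion matrix."""
    sensitivity = tp / (tp + fn)        # true-positive rate
    specificity = tn / (tn + fp)        # true-negative rate
    ppv = tp / (tp + fp)                # positive predictive value
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return sensitivity, specificity, ppv, accuracy

# Hypothetical counts for a credit-scoring classifier on a held-out sample:
# 80 true positives, 20 false positives, 10 false negatives, 890 true negatives.
print(classification_measures(80, 20, 10, 890))
```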
Abstract:
The aim of this work is to carry out an applicative, comparative and exhaustive study of several entropy-based indicators of independence and correlation. We considered indicators backed by a wide and consolidated literature, such as mutual information, joint entropy and relative entropy (Kullback-Leibler distance), as well as others more recently introduced, such as the Granger, Maasoumi and Racine entropy (also called Sρ), or used in more restricted domains, such as the Pincus approximate entropy (ApEn). We studied the behaviour of these indicators by applying them to binary series. The series were designed to simulate a wide range of situations in order to characterize the indicators' limits and capabilities and to identify, case by case, the more useful and trustworthy ones. Our target was not only to study whether these indicators can discriminate between dependence and independence, which, especially for mutual information and the Granger, Maasoumi and Racine entropy, has already been demonstrated in the literature, but also to verify whether and how they can provide information about the structure, complexity and disorder of the series they are applied to. Special attention was paid to the Pincus approximate entropy, which its author claims can provide information on the level of randomness, regularity and complexity of a series. By means of a focused and extensive analysis, we also tried to clarify the meaning of ApEn applied to a pair of different series; in that setting the indicator is known in the literature as cross-ApEn. The meaning of cross-ApEn and the interpretation of its results are often neither simple nor univocal, and the matter is scarcely delved into in the literature, so users can easily be led to misleading conclusions, especially if the indicator is employed, as unfortunately often happens, in an uncritical manner.
In order to fill some of the cross-ApEn gaps and limits that clearly emerged during the experimentation, we developed, and applied to the cases already considered, a further indicator we call the "correspondence index". The correspondence index integrates seamlessly into the cross-ApEn computational algorithm and is able to provide, at least for binary data, accurate information about the intensity and direction of any correlation, even a nonlinear one, existing between two different series, while at the same time detecting a possible condition of independence between the series themselves.
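For concreteness, Pincus' approximate entropy, discussed at length above, can be sketched as follows for binary series (the parameter defaults and names are illustrative; the correspondence index itself is not reproduced here):

```python
import math

def apen(series, m=2, r=0.5):
    """Pincus approximate entropy ApEn(m, r) of a sequence. For binary
    data, a tolerance r < 1 amounts to exact template matching."""
    n = len(series)
    def phi(m):
        templates = [series[i:i + m] for i in range(n - m + 1)]
        total = 0.0
        for t in templates:
            # fraction of templates within Chebyshev distance r of t
            c = sum(1 for u in templates
                    if max(abs(a - b) for a, b in zip(t, u)) <= r)
            total += math.log(c / len(templates))
        return total / len(templates)
    return phi(m) - phi(m + 1)

regular = [0, 1] * 32          # perfectly alternating: ApEn near zero
print(f"ApEn(0101...) = {apen(regular):.4f}")
```

A perfectly regular series scores near zero because every length-m template's continuation is fully predictable, which is exactly the sense in which ApEn is claimed to separate regularity from randomness.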
Abstract:
The objective of this work is the evaluation of the potential of navigation satellite signals to retrieve basic atmospheric parameters. A detailed study has been performed on the assumptions, more or less explicitly, contained in the common processing steps of navigation signals. A probabilistic procedure has been designed for measuring vertically discretised profiles of pressure, temperature and water vapour and their associated errors. Numerical experiments on a synthetic dataset have been performed with the main objective of quantifying the information that could be gained from such an approach, using entropy and relative entropy as test parameters. To this aim, a simulator of the phase delay and bending of a GNSS signal travelling across the atmosphere has been developed.
Abstract:
Arguably the deepest fact known about the von Neumann entropy, the strong subadditivity inequality is a potent hammer in the quantum information theorist's toolkit. This short tutorial describes a simple proof of strong subadditivity due to Petz [Rep. on Math. Phys. 23 (1), 57-65 (1986)]. It assumes only knowledge of elementary linear algebra and quantum mechanics.
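For reference, the inequality in question states that for any tripartite density matrix:

```latex
S(\rho_{ABC}) + S(\rho_{B}) \;\le\; S(\rho_{AB}) + S(\rho_{BC}),
\qquad S(\rho) = -\operatorname{Tr}\!\left(\rho \log \rho\right).
```

Equivalently, the conditional entropy does not increase when a system is adjoined; the inequality also follows from the monotonicity of the relative entropy under partial trace, which is the route most modern treatments, including Petz's, exploit.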
Abstract:
The principled statistical application of Gaussian random field models used in geostatistics has historically been limited to data sets of a small size. This limitation is imposed by the requirement to store and invert the covariance matrix of all the samples to obtain a predictive distribution at unsampled locations, or to use likelihood-based covariance estimation. Various ad hoc approaches to solve this problem have been adopted, such as selecting a neighborhood region and/or a small number of observations to use in the kriging process, but these have no sound theoretical basis and it is unclear what information is being lost. In this article, we present a Bayesian method for estimating the posterior mean and covariance structures of a Gaussian random field using a sequential estimation algorithm. By imposing sparsity in a well-defined framework, the algorithm retains a subset of “basis vectors” that best represent the “true” posterior Gaussian random field model in the relative entropy sense. This allows a principled treatment of Gaussian random field models on very large data sets. The method is particularly appropriate when the Gaussian random field model is regarded as a latent variable model, which may be nonlinearly related to the observations. We show the application of the sequential, sparse Bayesian estimation in Gaussian random field models and discuss its merits and drawbacks.
Abstract:
Social streams have proven to be the most up-to-date and inclusive source of information on current events. In this paper we propose a novel probabilistic modelling framework, called the violence detection model (VDM), which enables the identification of text containing violent content and the extraction of violence-related topics from social media data. The proposed VDM model does not require any labeled corpora for training; instead, it only needs the incorporation of word prior knowledge capturing whether a word indicates violence or not. We propose a novel approach to deriving word prior knowledge using a relative entropy measurement of words, based on the intuition that low-entropy words are indicative of semantically coherent topics and therefore more informative, while high-entropy words are more topically diverse in usage and therefore less informative. Our proposed VDM model has been evaluated on the TREC Microblog 2011 dataset to identify topics related to violence. Experimental results show that deriving word priors using our proposed relative entropy method is more effective than the widely used information gain method. Moreover, VDM achieves higher violence classification results and produces more coherent violence-related topics compared to several competitive baselines.
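As a generic illustration of the low-entropy intuition above (not VDM's actual prior derivation), one can rank words by the entropy of their distribution over documents: a word whose occurrences concentrate in few documents gets low entropy and is more topic-indicative. The toy corpus and function name are assumptions:

```python
import math
from collections import Counter

def word_entropy(docs):
    """Entropy of each word's distribution over documents; words whose
    occurrences concentrate in few documents receive low entropy."""
    per_doc = [Counter(d.split()) for d in docs]
    totals = Counter()
    for c in per_doc:
        totals.update(c)
    entropy = {}
    for w, tot in totals.items():
        probs = [c[w] / tot for c in per_doc if c[w] > 0]
        entropy[w] = -sum(p * math.log(p) for p in probs)
    return entropy

docs = ["riot riot police clash", "weather sunny mild", "riot erupts police deploy"]
h = word_entropy(docs)
# words sorted from most concentrated (low entropy) to most dispersed
print(sorted(h, key=h.get))
```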
Abstract:
This dissertation concerns the well-posedness of the Navier-Stokes-Smoluchowski system. The system models a mixture of fluid and particles in the so-called bubbling regime. The compressible Navier-Stokes equations governing the evolution of the fluid are coupled to the Smoluchowski equation for the particle density at a continuum level. First, working on fixed domains, the existence of weak solutions is established using a three-level approximation scheme and based largely on the Lions-Feireisl theory of compressible fluids. The system is then posed over a moving domain. By utilizing a Brinkman-type penalization as well as penalization of the viscosity, the existence of weak solutions of the Navier-Stokes-Smoluchowski system is proved over moving domains. As a corollary the convergence of the Brinkman penalization is proved. Finally, a suitable relative entropy is defined. This relative entropy is used to establish a weak-strong uniqueness result for the Navier-Stokes-Smoluchowski system over moving domains, ensuring that strong solutions are unique in the class of weak solutions.
Abstract:
In Neo-Darwinism, variation and natural selection are the two evolutionary mechanisms which propel biological evolution. Our previous article presented a histogram model [1] consisting of populations of individuals whose number changed under the influence of variation and/or fitness, the total population remaining constant. Individuals are classified into bins, and the content of each bin is calculated generation after generation by an Excel spreadsheet. Here, we apply the histogram model to a stable population with fitness F(1)=1.00 in which one or two fitter mutants emerge. In a first scenario, a single mutant emerged in the population whose fitness was greater than 1.00. The simulations ended when the original population was reduced to a single individual. The histogram model was validated by excellent agreement between its predictions and those of a classical continuous function (Eqn. 1) which predicts the number of generations needed for a favorable mutation to spread throughout a population. But in contrast to Eqn. 1, our histogram model is adaptable to more complex scenarios, as demonstrated here. In the second and third scenarios, the original population was present at time zero together with two mutants which differed from the original population by two higher and distinct fitness values. In the fourth scenario, the large original population was present at time zero together with one fitter mutant. After a number of generations, when the mutant offspring had multiplied, a second mutant was introduced whose fitness was even greater. The histogram model also allows Shannon entropy (SE) to be monitored continuously as the information content of the total population decreases or increases. The results of these simulations illustrate, in a graphically didactic manner, the influence of natural selection, operating through relative fitness, in the emergence and dominance of a fitter mutant.
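The first scenario above can be sketched as a minimal two-bin selection update with the total population held constant; the fitness values, population size, and generation count are illustrative assumptions, not the article's spreadsheet model:

```python
import math

def shannon_entropy(pop):
    """Shannon entropy (nats) of the population's distribution over bins."""
    total = sum(pop)
    return -sum((n / total) * math.log(n / total) for n in pop if n > 0)

# Two bins: original population (fitness 1.00) and one fitter mutant (1.05);
# discrete-generation selection with the total held constant at 1000.
pop, fitness, total = [999.0, 1.0], [1.00, 1.05], 1000.0
for gen in range(600):
    weighted = [n * f for n, f in zip(pop, fitness)]
    s = sum(weighted)
    pop = [total * w / s for w in weighted]

print(f"mutant share after 600 generations: {pop[1] / total:.3f}")
print(f"Shannon entropy: {shannon_entropy(pop):.4f}")
```

As the fitter mutant takes over, the entropy falls toward zero, mirroring the decrease in information content of the total population described above.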
Abstract:
We present a study of the influence of atomic order on the relative stability of the bcc and the 18R martensitic structures in a Cu2.96Al0.92Be0.12 crystal. Calorimetric measurements have shown that disorder increases the stability of the 18R phase, contrary to what happens in Cu-Zn-Al alloys for which it is the bcc phase that is stabilized by disordering the system. This different behavior has been explained in terms of a model recently reported. We have also proved that the entropy change at the martensitic transition is independent of the state of atomic order of the crystal, as predicted theoretically. Our results suggest that differences in the vibrational spectrum of the crystal due to different states of atomic order must be equal in the bcc and in the close-packed phases.
Abstract:
This paper uses an entropy-based information approach to determine if farmland values are more closely associated with urban pressure or farm income. The basic question is: how much information on changes in farm real estate values is contained in changes in population versus changes in returns to production agriculture? Results suggest population is informative, but changes in farmland values are more strongly associated with changes in the distribution of returns. However, this relationship is not true for every region nor does it hold over time, as for some regions and time periods changes in population are more informative. Results have policy implications for both equity and efficiency.
Abstract:
Transgenerational inheritance of abiotic stress-induced epigenetic modifications in plants has potential adaptive significance and might condition the offspring to improve the response to the same stress, but this is at least partly dependent on the potency, penetrance and persistence of the transmitted epigenetic marks. We examined transgenerational inheritance of low Relative Humidity-induced DNA methylation for two gene loci in the stomatal developmental pathway in Arabidopsis thaliana and the abundance of associated short-interfering RNAs (siRNAs). Heritability of low humidity-induced methylation was more predictable and penetrative at one locus (SPEECHLESS, entropy ≤ 0.02; χ2 < 0.001) than the other (FAMA, entropy ≤ 0.17; χ2 ns). Methylation at SPEECHLESS correlated positively with the continued presence of local siRNAs (r2 = 0.87; p = 0.013) which, however, could be disrupted globally in the progeny under repeated stress. Transgenerational methylation and a parental low humidity-induced stomatal phenotype were heritable, but this was reversed in the progeny under repeated treatment in a previously unsuspected manner.