924 results for Shannon's entropy
Abstract:
The Rényi and von Neumann entropies quantifying the amount of entanglement in ground states of critical spin chains are known to satisfy a universal law given by the conformal field theory (CFT) describing their scaling regime. This law can be generalized to excitations described by primary fields in CFT, as was done by Alcaraz et al. in 2011 (see reference [1], of which this work is a completion). An alternative derivation is presented, together with numerical verification of our results in different models belonging to the c = 1 and c = 1/2 universality classes. Oscillations of the Rényi entropy in excited states are also discussed.
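As an illustrative aside (this is not the CFT computation itself), the Rényi entropy of a reduced density matrix with eigenvalues p_i, and its von Neumann limit as alpha approaches 1, can be sketched as follows; the function names and example values are hypothetical:

```python
import math

def renyi_entropy(probs, alpha):
    """Renyi entropy S_alpha = log(sum_i p_i**alpha) / (1 - alpha).

    As alpha -> 1 this reduces to the von Neumann (Shannon) entropy
    S = -sum_i p_i log p_i.  `probs` are the eigenvalues of a reduced
    density matrix: non-negative and summing to 1."""
    if abs(alpha - 1.0) < 1e-12:
        return -sum(p * math.log(p) for p in probs if p > 0)
    return math.log(sum(p ** alpha for p in probs if p > 0)) / (1.0 - alpha)

# Example: a maximally entangled two-qubit state has eigenvalues (1/2, 1/2),
# so every Renyi entropy equals log 2.
probs = [0.5, 0.5]
print(renyi_entropy(probs, 2))   # Renyi-2 entropy
print(renyi_entropy(probs, 1))   # von Neumann limit
```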
Abstract:
We analyzed the effectiveness of linear short- and long-term variability time-domain parameters, an index of sympatho-vagal balance (SDNN/RMSSD) and entropy in differentiating fetal heart rate patterns (fHRPs) on fetal heart rate (fHR) series of 5, 3 and 2 min duration reconstructed from 46 fetal magnetocardiograms. Gestational age (GA) varied from 21 to 38 weeks. fHRPs were classified based on the fHR standard deviation. In sleep states, we observed that vagal influence increased with GA: entropy significantly increased, and SDNN/RMSSD significantly decreased, with GA, demonstrating that a prevalence of vagal activity accompanying autonomic nervous system maturation may be associated with increased sleep-state complexity. In active wakefulness, we observed a significant negative (positive) correlation of short-term (long-term) variability parameters with SDNN/RMSSD. ANOVA statistics demonstrated that long-term irregularity and the standard deviation of normal-to-normal beat intervals (SDNN) best differentiated among fHRPs. Our results confirm that short- and long-term variability parameters are useful for differentiating between quiet and active states, and that entropy improves the characterization of sleep states. All measures differentiated fHRPs more effectively on very short HR series, as a result of the high temporal resolution of fMCG and of the intrinsic timescales of the events that originate the different fHRPs.
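The linear time-domain indices named above can be sketched in a few lines of Python; the NN-interval values are hypothetical, not data from the study:

```python
import math
from statistics import pstdev

def sdnn(nn_ms):
    """SDNN: standard deviation of normal-to-normal (NN) intervals,
    a long-term variability measure (population SD, in ms)."""
    return pstdev(nn_ms)

def rmssd(nn_ms):
    """RMSSD: root mean square of successive NN differences,
    a short-term (vagally mediated) variability measure."""
    diffs = [b - a for a, b in zip(nn_ms, nn_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def sympatho_vagal_index(nn_ms):
    """SDNN/RMSSD ratio, used in the abstract as an index of
    sympatho-vagal balance."""
    return sdnn(nn_ms) / rmssd(nn_ms)

nn = [400, 410, 405, 420, 415, 408, 412]   # hypothetical fetal NN intervals (ms)
print(round(sympatho_vagal_index(nn), 3))
```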
Abstract:
This paper presents a theoretical analysis of a cross-flow heat exchanger with a new flow arrangement comprising several tube rows. The thermal performance of the proposed flow arrangement is compared with that of a typical counter-cross-flow arrangement used in the chemical, refrigeration, automotive and air-conditioning industries. The comparison has been performed in terms of the following parameters: heat exchanger effectiveness and efficiency, dimensionless entropy generation, entransy dissipation number, and dimensionless local temperature differences. It is also shown that uniformity of the temperature difference field leads to higher thermal performance of the heat exchanger; in the present case this is accomplished through a different organization of the in-tube fluid circuits. The relation between the recently introduced "entransy dissipation number" and the conventional thermal effectiveness has been obtained in terms of the "number of transfer units". A case study has been solved to quantitatively obtain the temperature difference distribution over two-row units for both the proposed arrangement and the counter-cross-flow one. It has been shown that the proposed arrangement presents better thermal performance regardless of the comparison parameter. (C) 2012 Elsevier Masson SAS. All rights reserved.
Abstract:
Exact results on particle densities, as well as correlators, in two models of immobile particles, containing either a single species or two distinct species, are derived. The models evolve by a descent dynamics through pair annihilation in which each particle interacts at most once throughout its entire history. The resulting large number of stationary states leads to a non-vanishing configurational entropy. Our results are established for arbitrary initial conditions and are derived via a generating-function method. The single-species model is the dual of the 1D zero-temperature kinetic Ising model with Kimball-Deker-Haake dynamics. In this way, finite and semi-infinite chains, as well as the Bethe lattice, can be analysed. The relationship with the random sequential adsorption of dimers and with weakly tapped granular materials is discussed.
Abstract:
We discuss a new interacting model for the cosmological dark sector in which the attenuated dilution of cold dark matter scales as a^(-3) f(a), where f(a) is an arbitrary function of the cosmic scale factor a. From thermodynamic arguments, we show that f(a) is proportional to the entropy source of the particle creation process. In order to investigate the cosmological consequences of this kind of interacting model, we expand f(a) in a power series, and viable cosmological solutions are obtained. Finally, we use current observational data to place constraints on the interacting function f(a).
Abstract:
We investigate how the initial geometry of a heavy-ion collision is transformed into final flow observables by solving event-by-event ideal hydrodynamics with realistic fluctuating initial conditions. We study quantitatively to what extent anisotropic flow (v_n) is determined by the initial eccentricity epsilon_n for a set of realistic simulations, and we discuss which definition of epsilon_n gives the best estimator of v_n. We find that the common practice of using an r^2 weight in the definition of epsilon_n in general results in a poorer predictor of v_n than using an r^n weight, for n > 2. We similarly study the importance of additional properties of the initial state. For example, we show that in order to correctly predict v_4 and v_5 for noncentral collisions, one must take into account nonlinear terms proportional to epsilon_2^2 and epsilon_2*epsilon_3, respectively. We find that it makes no difference whether one calculates the eccentricities over a range of rapidity or in a single slice at z = 0, nor is it important whether one uses an energy or entropy density weight. This knowledge will be important for making a more direct link between experimental observables and hydrodynamic initial conditions, the latter being poorly constrained at present.
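A minimal sketch of the eccentricity definition discussed above, computed from sampled density points; the function name and the four-point example are hypothetical, and only the magnitude |epsilon_n| is returned (sign conventions aside):

```python
import cmath
import math

def eccentricity(points, weights, n, radial_power=None):
    """Initial-state eccentricity epsilon_n of a transverse density profile.

    points  : (x, y) positions of density samples, centered on the origin
    weights : local energy or entropy density at each point
    The r**n radial weight is the default; passing radial_power=2
    reproduces the common (and, per the abstract, poorer for n > 2)
    r**2 choice."""
    if radial_power is None:
        radial_power = n
    num, den = 0j, 0.0
    for (x, y), w in zip(points, weights):
        r, phi = math.hypot(x, y), math.atan2(y, x)
        num += w * r ** radial_power * cmath.exp(1j * n * phi)
        den += w * r ** radial_power
    return abs(num) / den

# Hypothetical example: four equal sources on an ellipse give finite epsilon_2.
pts = [(2, 0), (-2, 0), (0, 1), (0, -1)]
print(eccentricity(pts, [1, 1, 1, 1], n=2))
```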
Abstract:
In this work, we probe the stability of a z = 3 three-dimensional Lifshitz black hole using scalar and spinorial perturbations. We find an analytical expression for the quasinormal frequencies of the scalar probe field, which agrees perfectly with the behavior of the quasinormal modes obtained numerically. The numerical analysis of the spinorial perturbations reinforces the conclusion of the scalar analysis, i.e., the model is stable under scalar and spinor perturbations. As an application, we obtain the area spectrum of the Lifshitz black hole, which turns out to be equally spaced.
Abstract:
Native bees are important providers of pollination services, but there is accumulating evidence of their decline. Global changes such as habitat loss, invasion by exotic species and climate change have been suggested as the main causes of pollinator decline. In this study, the influence of climate change on the distribution of 10 species of Brazilian bees was estimated with species distribution modelling. We used the Maxent (maximum entropy) algorithm under two different scenarios, one optimistic and one pessimistic, for the years 2050 and 2080. We also evaluated the percentage reduction of species habitat under the future climate change scenarios using a Geographic Information System (GIS). Results showed that the total area of suitable habitat decreased for all species but one under the different future scenarios. The greatest reductions in habitat area were found for Melipona bicolor bicolor and Melipona scutellaris, which occur predominantly in areas originally associated with the Atlantic Moist Forest. The species analysed have been reported to be pollinators of some regional crops, and the consequences of their decline for these crops need further clarification. (C) 2012 Elsevier B.V. All rights reserved.
Abstract:
A chaotic encryption algorithm is proposed based on "Life-like" cellular automata (CA), which act as a pseudo-random number generator (PRNG). The paper's main focus is the application of chaos theory to cryptography; thus, the CA were explored in search of this "chaos" property. Accordingly, the manuscript concentrates on tests such as the Lyapunov exponent, entropy and Hamming distance to measure chaos in the CA, as well as statistical analyses such as the DIEHARD and ENT suites. Our results achieved higher randomness quality than other ciphers in the literature. These results reinforce the supposition of a strong relationship between chaos and randomness quality. The "chaos" property of CA is thus a good reason to employ them in cryptography, along with their simplicity, low implementation cost and respectable encryption power. (C) 2012 Elsevier Ltd. All rights reserved.
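The entropy test mentioned above can be sketched with a toy stand-in: the paper uses 2D "Life-like" CA, but a 1D rule-30 elementary CA already serves as a simple PRNG whose output bitstream can be scored by Shannon entropy. All names and parameters below are illustrative:

```python
import math
from collections import Counter

def rule30_bits(width, steps):
    """Bitstream from the center cell of an elementary rule-30 CA
    (new cell = left XOR (center OR right), periodic boundary).
    Note: a 1D stand-in for the paper's 2D Life-like CA."""
    cells = [0] * width
    cells[width // 2] = 1            # single-seed initial condition
    out = []
    for _ in range(steps):
        out.append(cells[width // 2])
        cells = [cells[(i - 1) % width] ^ (cells[i] | cells[(i + 1) % width])
                 for i in range(width)]
    return out

def shannon_entropy(symbols):
    """Shannon entropy in bits per symbol; ~1.0 for a balanced bitstream."""
    counts = Counter(symbols)
    total = len(symbols)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

bits = rule30_bits(width=101, steps=4096)
print(round(shannon_entropy(bits), 4))   # close to 1 for a good PRNG
```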
Abstract:
Statistical methods have been widely employed to assess the capabilities of credit scoring classification models in order to reduce the risk of wrong decisions when granting credit facilities to clients. The predictive quality of a classification model can be evaluated based on measures such as sensitivity, specificity, predictive values, accuracy, correlation coefficients and information-theoretic measures such as relative entropy and mutual information. In this paper we analyze the performance of a naive logistic regression model (Hosmer & Lemeshow, 1989) and a logistic regression with state-dependent sample selection model (Cramer, 2004) applied to simulated data. As a case study, the methodology is also illustrated on a data set extracted from a Brazilian bank portfolio. Our simulation results revealed no statistically significant difference in predictive capacity between the naive logistic regression models and the logistic regression with state-dependent sample selection models. However, there is a strong difference between the distributions of the estimated default probabilities from these two statistical modeling techniques, with the naive logistic regression models always underestimating such probabilities, particularly in the presence of balanced samples. (C) 2012 Elsevier Ltd. All rights reserved.
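The evaluation measures listed above follow directly from a confusion matrix; a minimal sketch (hypothetical counts, not the study's data), with relative entropy as the information-theoretic example:

```python
import math

def classification_metrics(tp, fp, tn, fn):
    """Predictive-quality measures for a binary credit-scoring classifier,
    computed from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),          # defaulters correctly flagged
        "specificity": tn / (tn + fp),          # good payers correctly passed
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "ppv": tp / (tp + fp),                  # positive predictive value
    }

def relative_entropy(p, q):
    """Kullback-Leibler divergence D(p || q) between two discrete
    distributions, one of the measures named in the abstract."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

m = classification_metrics(tp=40, fp=10, tn=45, fn=5)   # hypothetical counts
print(m["accuracy"])
```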
Abstract:
Background: Prostate cancer is a leading cause of death in the male population; therefore, a comprehensive study of the genes and molecular networks involved in the tumoral prostate process becomes necessary. In order to understand the biological process behind potential biomarkers, we have analyzed a set of 57 cDNA microarrays containing ~25,000 genes.
Results: Principal Component Analysis (PCA) combined with Maximum-entropy Linear Discriminant Analysis (MLDA) was applied in order to identify genes with the most discriminative information between normal and tumoral prostatic tissues. Data analysis was carried out using three different approaches, namely: (i) differences in gene expression levels between normal and tumoral conditions from a univariate point of view; (ii) a multivariate analysis using MLDA; and (iii) a dependence network approach. Our results show that malignant transformation in prostatic tissue is more related to functional connectivity changes in the dependence networks than to differential gene expression. The MYLK, KLK2, KLK3, HAN11, LTF, CSRP1 and TGM4 genes presented significant changes in their functional connectivity between normal and tumoral conditions and were also classified as the top seven most informative genes for the prostate cancer genesis process by our discriminant analysis. Moreover, among the identified genes we found classically known biomarkers closely related to tumoral prostate, such as KLK3 and KLK2, and several other potential ones.
Conclusion: We have demonstrated that changes in functional connectivity may be implicit in the biological process which renders some genes more informative for discriminating between normal and tumoral conditions. Using the proposed method, MLDA, to analyze the multivariate characteristics of genes, it was possible to capture the changes in dependence networks which are related to cell transformation.
Abstract:
Background: Decreased heart rate variability (HRV) is related to higher morbidity and mortality. In this study we evaluated linear and nonlinear indices of HRV in stable angina patients submitted to coronary angiography.
Methods: We studied 77 unselected patients scheduled for elective coronary angiography, who were divided into two groups: a coronary artery disease (CAD) group and a non-CAD group. For analysis of the HRV indices, HRV was recorded beat by beat with the volunteers in the supine position for 40 minutes. We analyzed linear indices in the time domain (SDNN [standard deviation of normal-to-normal intervals], NN50 [total number of adjacent RR intervals differing by more than 50 ms] and RMSSD [root mean square of successive differences]) and in the frequency domain: ultra-low frequency (ULF, ≤ 0.003 Hz), very low frequency (VLF, 0.003–0.04 Hz), low frequency (LF, 0.04–0.15 Hz) and high frequency (HF, 0.15–0.40 Hz), as well as the ratio between the LF and HF components (LF/HF). Among the nonlinear indices, we evaluated SD1, SD2, SD1/SD2, approximate entropy (-ApEn), α1, α2, the Lyapunov exponent, the Hurst exponent, autocorrelation and the correlation dimension. The cutoff points of the variables for predictive tests were obtained from the Receiver Operating Characteristic (ROC) curve. The area under the ROC curve was calculated by the extended trapezoidal rule, with areas under the curve ≥ 0.650 considered relevant.
Results: Coronary artery disease patients presented reduced values of SDNN, RMSSD, NN50, HF, SD1, SD2 and -ApEn. HF ≤ 66 ms^2, RMSSD ≤ 23.9 ms, ApEn ≤ -0.296 and NN50 ≤ 16 presented the best discriminatory power for the presence of significant coronary obstruction.
Conclusion: We suggest the use of heart rate variability analysis in both the linear and nonlinear domains for prognostic purposes in patients with stable angina pectoris, in view of their overall impairment.
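Of the nonlinear indices above, approximate entropy is the most algorithmic; a compact sketch of the standard ApEn definition (m template length, r tolerance) on a hypothetical RR series follows. This is an illustration of the general method, not the study's exact implementation:

```python
import math

def approximate_entropy(series, m=2, r=None):
    """Approximate entropy (ApEn): lower values indicate a more regular,
    less complex series.  r defaults to 0.2 * SD of the series, a
    common convention."""
    n = len(series)
    if r is None:
        mean = sum(series) / n
        sd = math.sqrt(sum((x - mean) ** 2 for x in series) / n)
        r = 0.2 * sd

    def phi(m):
        # Average log-frequency with which length-m templates repeat
        # within tolerance r (Chebyshev distance, self-matches included).
        templates = [series[i:i + m] for i in range(n - m + 1)]
        total = 0.0
        for a in templates:
            matches = sum(1 for b in templates
                          if max(abs(x - y) for x, y in zip(a, b)) <= r)
            total += math.log(matches / len(templates))
        return total / len(templates)

    return phi(m) - phi(m + 1)

# A strictly periodic series is maximally regular, so ApEn is near zero.
print(approximate_entropy([1, 2] * 50))
```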
Abstract:
This work reports aspects of seed germination at different temperatures in Adenanthera pavonina L., a woody Southeast Asian Leguminosae. Germination was studied by measuring the final percentages, the rate, the rate variance and the synchronisation of individual seeds, the latter calculated via the minimal informational entropy of the frequency distribution of seed germination. By overlapping the germinability range with the range of highest germination rates and minimal informational entropy, we found that the best temperature for the germination of A. pavonina seeds is 35 °C. The slope µ of the Arrhenius plot of the germination rates is positive for T < 35 °C and negative for T > 35 °C. The activation enthalpies, estimated from closely spaced points, show that |ΔH| < 12 Cal mol^-1 occurs for temperatures between 25 °C and 40 °C. The ecological implication of these results is that this species may germinate very fast in tropical areas during the summer season, which may be an advantage for the establishment of this species under the climatic conditions of those areas.
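The synchrony measure used above is the Shannon (informational) entropy of the relative frequencies of germination times; a minimal sketch with hypothetical germination days (function name ours):

```python
import math
from collections import Counter

def germination_synchrony_entropy(germination_days):
    """Informational (Shannon) entropy, in bits, of the relative frequency
    distribution of germination times.  E = 0 means all seeds germinated
    on the same day (perfect synchrony); larger E means germination was
    more spread out in time."""
    counts = Counter(germination_days)
    total = len(germination_days)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

print(germination_synchrony_entropy([3, 3, 3, 3]))   # fully synchronous
print(germination_synchrony_entropy([2, 3, 4, 5]))   # fully spread out
```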
Abstract:
Although hydrophobicity is usually an arduous parameter to determine in the field, it has been pointed out as a good option for monitoring the aging of polymeric outdoor insulators. For this purpose, digital image processing of photos taken from wet insulators is nowadays the main technique. However, important challenges in this technique remain to be overcome, such as the interference of non-controlled illumination conditions on the analyses and the absence of standard surfaces with different levels of hydrophobicity. In this paper, the photo image samples were digitally filtered to reduce the influence of illumination, and hydrophobic surface samples were prepared by wetting silicone surfaces with a water-alcohol solution. Furthermore, no previous studies were found that try to quantify and relate these properties in a mathematical function that could be used in the field by electrical utilities. Based on such considerations, high-quality images of numerous hydrophobic surfaces were obtained, and three different image processing methodologies, the fractal dimension and two Haralick texture descriptors, entropy and homogeneity, combined with several digital filters, were compared. The entropy Haralick descriptor combined with the White Top-Hat filter presented the best results for classifying hydrophobicity.
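The two Haralick descriptors compared above are computed from a grey-level co-occurrence matrix (GLCM); a minimal single-offset sketch follows (function names and the toy images are ours, and a real pipeline would average over several offsets):

```python
import math
from collections import Counter

def glcm(image, dx=1, dy=0):
    """Normalised grey-level co-occurrence matrix for one pixel offset:
    p[(i, j)] is the probability that grey level i is followed by
    grey level j at displacement (dx, dy)."""
    counts = Counter()
    h, w = len(image), len(image[0])
    for y in range(h - dy):
        for x in range(w - dx):
            counts[(image[y][x], image[y + dy][x + dx])] += 1
    total = sum(counts.values())
    return {pair: c / total for pair, c in counts.items()}

def haralick_entropy(p):
    """Entropy descriptor: high for disordered textures such as droplet
    patterns on a hydrophobic surface."""
    return -sum(v * math.log2(v) for v in p.values())

def haralick_homogeneity(p):
    """Homogeneity (inverse difference moment): high for smooth,
    uniform textures."""
    return sum(v / (1 + (i - j) ** 2) for (i, j), v in p.items())

flat = [[1, 1], [1, 1]]                 # toy uniform image
print(haralick_entropy(glcm(flat)))     # zero: no texture
print(haralick_homogeneity(glcm(flat))) # one: perfectly smooth
```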
Abstract:
Exergetic analysis can provide useful information as it enables the identification of irreversible phenomena that bring about entropy generation and, therefore, exergy losses (also referred to as irreversibilities). As far as human thermal comfort is concerned, irreversibilities can be evaluated based on parameters related to both the occupant and the surroundings. As an attempt to offer further insight into the exergetic analysis of thermal comfort, this paper calculates irreversibility rates for a sitting person wearing fairly light clothes and subjected to combinations of ambient air and mean radiant temperatures. The thermodynamic model framework relies on the so-called conceptual energy balance equation together with empirical correlations for the invoked thermoregulatory heat transfer rates, adapted for a clothed body. Results suggest that a minimum irreversibility rate may exist for particular combinations of the aforesaid surrounding temperatures. By separately considering the contribution of each thermoregulatory mechanism, the total irreversibility rate proved more responsive to either convective or radiative clothing-influenced heat transfer, with exergy losses becoming lower when the body is able to transfer more heat to the ambient via convection.