16 results for graph entropy
in the Biblioteca Digital da Produção Intelectual da Universidade de São Paulo
Abstract:
This paper addresses the functional reliability and the complexity of reconfigurable antennas using graph models. The correlation between complexity and reliability for any given reconfigurable antenna is defined. Two methods are proposed to reduce failures and improve the reliability of reconfigurable antennas. The failures are caused by the reconfiguration technique or by the surrounding environment. The proposed failure-reduction methods are tested, and examples verifying them are given.
Abstract:
The Sznajd model is a sociophysics model that is used to model opinion propagation and consensus formation in societies. Its main feature is that its rules favor bigger groups of agreeing people. In a previous work, we generalized the bounded confidence rule in order to model biases and prejudices in discrete opinion models. In that work, we applied this modification to the Sznajd model and presented some preliminary results. The present work extends what we did in that paper. We present results linking many of the properties of the mean-field fixed points to only a few qualitative aspects of the confidence rule (the biases and prejudices modeled), finding an interesting connection with graph theory problems. More precisely, we link the existence of fixed points with the notion of strongly connected graphs and the stability of fixed points with the problem of finding the maximal independent sets of a graph. We state these results and present comparisons between the mean field and simulations in Barabási-Albert networks, followed by the main mathematical ideas and appendices with the rigorous proofs of our claims and some graph theory concepts, together with examples. We also show that there is no qualitative difference in the mean-field results if we require that a group of q > 2 agreeing agents, instead of a pair, be formed before they attempt to convince other sites (for the mean field, this would coincide with the q-voter model).
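As a small, hedged illustration of the graph-theoretic notions named in the abstract (not the paper's own code), the sketch below builds a toy directed graph standing in for a biased confidence rule, checks strong connectivity (related to the existence of fixed points), and extracts a maximal independent set of its undirected version (related to their stability); the graph and its edges are illustrative assumptions.

```python
# Toy "confidence rule" graph: nodes are opinions, an edge u -> v means opinion u can
# convince an agent holding opinion v (an assumed stand-in for the biased rule).
import networkx as nx

g = nx.DiGraph([(0, 1), (1, 2), (2, 0), (2, 3)])

# Existence of fixed points is linked to strong connectivity in the abstract.
print(nx.is_strongly_connected(g))

# Stability of fixed points is linked to maximal independent sets of the graph.
print(nx.maximal_independent_set(g.to_undirected(), seed=3))
```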
Discriminating Different Classes of Biological Networks by Analyzing the Graphs Spectra Distribution
Abstract:
The brain's structural and functional systems, protein-protein interaction, and gene networks are examples of biological systems that share some features of complex networks, such as highly connected nodes, modularity, and small-world topology. Recent studies indicate that some pathologies present topological network alterations relative to norms seen in the general population. Therefore, methods to discriminate the processes that generate the different classes of networks (e.g., normal and disease) might be crucial for the diagnosis, prognosis, and treatment of the disease. It is known that several topological properties of a network (graph) can be described by the distribution of the spectrum of its adjacency matrix. Moreover, large networks generated by the same random process have the same spectrum distribution, allowing us to use it as a "fingerprint". Based on this relationship, we introduce and propose the entropy of a graph spectrum to measure the "uncertainty" of a random graph and the Kullback-Leibler and Jensen-Shannon divergences between graph spectra to compare networks. We also introduce general methods for model selection and network model parameter estimation, as well as a statistical procedure to test the nullity of divergence between two classes of complex networks. Finally, we demonstrate the usefulness of the proposed methods by applying them to (1) protein-protein interaction networks of different species and (2) networks derived from children diagnosed with Attention Deficit Hyperactivity Disorder (ADHD) and typically developing children. We conclude that scale-free networks best describe all the protein-protein interactions. Also, we show that our proposed measures succeeded in identifying topological changes in the network while other commonly used measures (number of edges, clustering coefficient, average path length) failed.
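The spectral measures described above can be sketched directly: estimate the eigenvalue density of each adjacency matrix, take its Shannon entropy, and compare two graphs with the Jensen-Shannon divergence between their densities. This is a minimal, assumed implementation; the grid, the kernel smoothing, and the example graphs are illustrative choices, not the authors' exact procedure.

```python
import numpy as np
import networkx as nx
from scipy.stats import gaussian_kde

def spectral_density(graph, grid):
    """Kernel-smoothed eigenvalue density of the adjacency matrix, normalized on `grid`."""
    eigenvalues = np.linalg.eigvalsh(nx.to_numpy_array(graph))
    density = gaussian_kde(eigenvalues)(grid)
    return density / np.trapz(density, grid)

def spectral_entropy(density, grid):
    """Differential Shannon entropy of a spectral density."""
    p = np.where(density > 0, density, 1e-300)
    return -np.trapz(p * np.log(p), grid)

def kl_divergence(p, q, grid):
    p_safe = np.where(p > 0, p, 1e-300)
    q_safe = np.where(q > 0, q, 1e-300)
    return np.trapz(p * np.log(p_safe / q_safe), grid)

def js_divergence(p, q, grid):
    """Jensen-Shannon divergence between two spectral densities."""
    m = 0.5 * (p + q)
    return 0.5 * kl_divergence(p, m, grid) + 0.5 * kl_divergence(q, m, grid)

# Example: compare an Erdos-Renyi graph with a Barabasi-Albert graph of the same size.
grid = np.linspace(-20, 25, 2000)
p = spectral_density(nx.erdos_renyi_graph(300, 0.05, seed=1), grid)
q = spectral_density(nx.barabasi_albert_graph(300, 8, seed=1), grid)
print(spectral_entropy(p, grid), js_divergence(p, q, grid))
```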
Abstract:
Complexity in time series is an intriguing feature of living dynamical systems, with potential use for identification of system state. Although various methods have been proposed for measuring physiologic complexity, uncorrelated time series are often assigned high values of complexity, erroneously classifying them as complex physiological signals. Here, we propose and discuss a method for complex system analysis based on generalized statistical formalism and surrogate time series. Sample entropy (SampEn) was rewritten, inspired by the Tsallis generalized entropy, as a function of the parameter q (qSampEn). qSDiff curves, which consist of the differences between the qSampEn of the original and surrogate series, were calculated. We evaluated qSDiff for 125 real heart rate variability (HRV) dynamics, divided into groups of 70 healthy, 44 congestive heart failure (CHF), and 11 atrial fibrillation (AF) subjects, and for simulated series of stochastic and chaotic processes. The evaluations showed that, for nonperiodic signals, qSDiff curves have a maximum point (qSDiffmax) for q ≠ 1. Values of q where the maximum point occurs and where qSDiff is zero were also evaluated. Only qSDiffmax values were capable of distinguishing the HRV groups (p-values of 5.10 × 10⁻³, 1.11 × 10⁻⁷, and 5.50 × 10⁻⁷ for healthy vs. CHF, healthy vs. AF, and CHF vs. AF, respectively), consistent with the concept of physiologic complexity, which suggests a potential use for chaotic system analysis. (C) 2012 American Institute of Physics. [http://dx.doi.org/10.1063/1.4758815]
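The abstract builds on sample entropy and the Tsallis formalism. The sketch below implements classic SampEn and the Tsallis q-logarithm as the two ingredients; the exact way the paper combines them into qSampEn (and the surrogate-based qSDiff curves) is not reproduced here, so treat this only as an assumed illustration.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Classic SampEn(m, r) of a 1-D series, with tolerance r = r_factor * std(x)."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()

    def count_matches(length):
        # Count template pairs whose Chebyshev distance is within r.
        templates = np.array([x[i:i + length] for i in range(len(x) - length)])
        count = 0
        for i in range(len(templates)):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(dist <= r)
        return count

    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def q_log(y, q):
    """Tsallis q-logarithm: ln_q(y) = (y**(1-q) - 1)/(1-q), recovering ln(y) as q -> 1."""
    return np.log(y) if np.isclose(q, 1.0) else (y ** (1.0 - q) - 1.0) / (1.0 - q)

rng = np.random.default_rng(0)
print(sample_entropy(rng.normal(size=1000)))   # uncorrelated noise: high SampEn
print(q_log(2.0, q=0.5), q_log(2.0, q=1.0))
```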
Abstract:
A deep theoretical analysis of the graph cut image segmentation framework presented in this paper simultaneously translates into important contributions in several directions. The most important practical contribution of this work is a full theoretical description, and implementation, of a novel powerful segmentation algorithm, GC_max. The output of GC_max coincides with a version of a segmentation algorithm known as Iterative Relative Fuzzy Connectedness, IRFC. However, GC_max is considerably faster than the classic IRFC algorithm, which we prove theoretically and show experimentally. Specifically, we prove that, in the worst case scenario, the GC_max algorithm runs in linear time with respect to the variable M = |C| + |Z|, where |C| is the image scene size and |Z| is the size of the allowable range, Z, of the associated weight/affinity function. For most implementations, Z is identical to the set of allowable image intensity values, and its size can be treated as small with respect to |C|, meaning that O(M) = O(|C|). In such a situation, GC_max runs in linear time with respect to the image size |C|. We show that the output of GC_max constitutes a solution of a graph cut energy minimization problem, in which the energy is defined as the ℓ_∞ norm ||F_P||_∞ of the map F_P that associates, with every element e from the boundary of an object P, its weight w(e). This formulation brings IRFC algorithms to the realm of graph cut energy minimizers, with energy functions ||F_P||_q for q ∈ [1, ∞]. Of these, the best known minimization problem is for the energy ||F_P||_1, which is solved by the classic min-cut/max-flow algorithm, referred to often as the Graph Cut algorithm. We notice that a minimization problem for ||F_P||_q, q ∈ [1, ∞), is identical to that for ||F_P||_1 when the original weight function w is replaced by w^q. Thus, any algorithm GC_sum solving the ||F_P||_1 minimization problem also solves the one for ||F_P||_q with q ∈ [1, ∞), so just two algorithms, GC_sum and GC_max, are enough to solve all ||F_P||_q minimization problems. We also show that, for any fixed weight assignment, the solutions of the ||F_P||_q minimization problems converge to a solution of the ||F_P||_∞ minimization problem (||F_P||_∞ = lim_{q→∞} ||F_P||_q is not enough to deduce that). An experimental comparison of the performance of the GC_max and GC_sum algorithms is included. This concentrates on comparing the actual (as opposed to provable worst scenario) running times of the algorithms, as well as the influence of the choice of the seeds on the output.
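The reduction stated in the abstract, that the ||F_P||_q problem reduces to the ||F_P||_1 (min-cut/max-flow) problem with weights raised to the power q, can be illustrated with a tiny sketch; the toy graph, node names, and the use of networkx's min-cut solver are assumptions, not the paper's GC_sum/GC_max implementations.

```python
import networkx as nx

def min_cut_lq(graph, source, sink, q):
    """Solve the ||F_P||_q minimization by a standard min-cut on weights w**q."""
    g = nx.DiGraph()
    for u, v, data in graph.edges(data=True):
        g.add_edge(u, v, capacity=data["weight"] ** q)
    cut_value, (reachable, non_reachable) = nx.minimum_cut(g, source, sink)
    # The l_q norm of the optimal cut is the q-th root of the minimized sum of w**q.
    return cut_value ** (1.0 / q), reachable

# Tiny example graph with seeded "source" (object) and "sink" (background) nodes.
g = nx.DiGraph()
g.add_edge("s", "a", weight=3.0); g.add_edge("a", "t", weight=1.0)
g.add_edge("s", "b", weight=2.0); g.add_edge("b", "t", weight=4.0)
for q in (1, 2, 8):
    print(q, min_cut_lq(g, "s", "t", q))
```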
Abstract:
We present a stochastic approach to nonequilibrium thermodynamics based on the expression of the entropy production rate advanced by Schnakenberg for systems described by a master equation. From the microscopic Schnakenberg expression we get the macroscopic bilinear form for the entropy production rate in terms of fluxes and forces. This is performed by placing the system in contact with two reservoirs with distinct sets of thermodynamic fields and by assuming an appropriate form for the transition rate. The approach is applied to an interacting lattice gas model in contact with two heat and particle reservoirs. On a square lattice, a continuous symmetry breaking phase transition takes place such that at the nonequilibrium ordered phase a heat flow sets in even when the temperatures of the reservoirs are the same. The entropy production rate is found to have a singularity at the critical point of the linear-logarithm type.
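A minimal sketch of the Schnakenberg expression referred to above, for a generic master equation with transition rates W[i, j] from state j to state i: the entropy production rate is the bilinear sum of fluxes and forces, sigma = (1/2) Σ_{i,j} (W[i,j] p[j] − W[j,i] p[i]) ln(W[i,j] p[j] / (W[j,i] p[i])). The three-state example with a driven cycle is an illustrative assumption, not the lattice gas model of the paper.

```python
import numpy as np

def stationary_distribution(W):
    """Stationary p of dp/dt = L p, with L[i,j] = W[i,j] - delta_ij * sum_k W[k,j]."""
    L = W - np.diag(W.sum(axis=0))
    eigvals, eigvecs = np.linalg.eig(L)
    p = np.real(eigvecs[:, np.argmin(np.abs(eigvals))])   # null eigenvector
    return p / p.sum()

def entropy_production_rate(W, p):
    """Schnakenberg entropy production rate: bilinear form of fluxes and forces."""
    sigma = 0.0
    n = len(p)
    for i in range(n):
        for j in range(n):
            if i != j and W[i, j] > 0 and W[j, i] > 0:
                flux = W[i, j] * p[j] - W[j, i] * p[i]
                force = np.log((W[i, j] * p[j]) / (W[j, i] * p[i]))
                sigma += 0.5 * flux * force
    return sigma

# Three-state example with a cycle that breaks detailed balance.
W = np.array([[0.0, 2.0, 1.0],
              [1.0, 0.0, 2.0],
              [2.0, 1.0, 0.0]])
p = stationary_distribution(W)
print(p, entropy_production_rate(W, p))   # sigma > 0: nonequilibrium steady state
```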
Abstract:
Using the density matrix renormalization group, we calculated the finite-size corrections of the entanglement α-Rényi entropy of a single interval for several critical quantum chains. We considered models with U(1) symmetry such as the spin-1/2 XXZ and spin-1 Fateev-Zamolodchikov models, as well as models with discrete symmetries such as the Ising, the Blume-Capel, and the three-state Potts models. These corrections contain physically relevant information. Their amplitudes, which depend on the value of α, are related to the dimensions of operators in the conformal field theory governing the long-distance correlations of the critical quantum chains. The obtained results together with earlier exact and numerical ones allow us to formulate some general conjectures about the operator responsible for the leading finite-size correction of the α-Rényi entropies. We conjecture that the exponent of the leading finite-size correction of the α-Rényi entropies is p_α = 2X_ε/α for α > 1 and p_1 = ν, where X_ε denotes the dimension of the energy operator of the model and ν = 2 for all the models.
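For reference, the α-Rényi entropy itself (not the DMRG finite-size analysis of the paper) can be evaluated from the eigenvalues of a reduced density matrix as S_α = ln(Tr ρ^α)/(1 − α), with the α → 1 limit recovering the von Neumann entropy; the two-level example below is an illustrative assumption.

```python
import numpy as np

def renyi_entropy(rho, alpha):
    """alpha-Renyi entropy S_alpha = ln(Tr rho**alpha) / (1 - alpha) of a density matrix."""
    eigenvalues = np.linalg.eigvalsh(rho)
    eigenvalues = eigenvalues[eigenvalues > 1e-12]
    if np.isclose(alpha, 1.0):
        return -np.sum(eigenvalues * np.log(eigenvalues))   # von Neumann limit
    return np.log(np.sum(eigenvalues ** alpha)) / (1.0 - alpha)

# Reduced density matrix of one spin of a singlet: maximally mixed, so S_alpha = ln 2 for all alpha.
rho = np.array([[0.5, 0.0], [0.0, 0.5]])
print([renyi_entropy(rho, a) for a in (0.5, 1.0, 2.0)])
```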
Abstract:
The nonequilibrium stationary state of an irreversible spherical model is investigated on hypercubic lattices. The model is defined by Langevin equations similar to the reversible case, but with asymmetric transition rates. In spite of being irreversible, we have succeeded in finding an explicit form for the stationary probability distribution, which turns out to be of the Boltzmann-Gibbs type. This enables one to evaluate the exact form of the entropy production rate at the stationary state, which is non-zero if the dynamical rules of the transition rates are asymmetric.
Abstract:
In this paper, a new algebraic-graph method for identification of islanding in power system grids is proposed. The proposed method identifies all the possible cases of islanding due to the loss of a piece of equipment by means of a factorization of the bus-branch incidence matrix. The main features of this new method include: (i) simple implementation, (ii) high speed, (iii) real-time adaptability, (iv) identification of all islanding cases and (v) identification of the buses that compose each island in case of island formation. The method was successfully tested on large-scale systems such as the reduced south Brazilian system (45 buses/72 branches) and the south-southeast Brazilian system (810 buses/1340 branches). (C) 2011 Elsevier Ltd. All rights reserved.
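The paper's factorization of the bus-branch incidence matrix is not reproduced here; as a simpler, assumed illustration of the same question, the sketch below checks whether the loss of a branch splits a small grid into islands and lists the buses of each island using connected components.

```python
import networkx as nx

def islands_after_outage(buses, branches, lost_branch):
    """Return the sets of buses in each island after `lost_branch` is removed."""
    g = nx.Graph()
    g.add_nodes_from(buses)
    g.add_edges_from(b for b in branches if b != lost_branch)
    return [set(component) for component in nx.connected_components(g)]

# Toy 5-bus grid; losing branch (4, 5) leaves bus 5 as an island.
buses = [1, 2, 3, 4, 5]
branches = [(1, 2), (2, 3), (3, 4), (4, 5), (2, 4)]
print(islands_after_outage(buses, branches, (4, 5)))
```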
Abstract:
Increasing age is associated with a reduction in overall heart rate variability as well as changes in the complexity of physiologic dynamics. The aim of this study was to verify whether the alterations in autonomic modulation of heart rate caused by the aging process could be detected by Shannon entropy (SE), conditional entropy (CE) and symbolic analysis (SA). Complexity analysis was carried out in 44 healthy subjects divided into two groups: an old group (n = 23, 63 ± 3 years) and a young group (n = 21, 23 ± 2 years). SE, CE [complexity index (CI) and normalized CI (NCI)] and SA (0V, 1V, 2LV and 2ULV patterns) were analyzed over short heart period series (200 cardiac beats) derived from ECG recordings during 15 min of rest in a supine position. The sequences characterized by three heart periods with no significant variations (0V) and those with two significant unlike variations (2ULV) reflect changes in sympathetic and vagal modulation, respectively. The unpaired t test (or Mann-Whitney rank sum test when appropriate) was used in the statistical analysis. In the aging process, the distributions of patterns (SE) remain similar to those of young subjects. However, the regularity is significantly different; the patterns are more repetitive in the old group (a decrease of CI and NCI). The amounts of pattern types are different: 0V is increased and 2LV and 2ULV are reduced in the old group. These differences indicate a marked change of autonomic regulation. The CE and SA are feasible techniques to detect alteration in autonomic control of heart rate in the old group.
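A minimal sketch (with assumed quantization choices) of the symbolic analysis mentioned above: RR intervals are quantized into six levels, grouped into three-beat words, and each word is classified as 0V, 1V, 2LV or 2ULV; the percentage of each pattern is then reported. The toy RR series is illustrative, not study data.

```python
import numpy as np

def symbolic_analysis(rr, levels=6):
    """Percentages of 0V, 1V, 2LV and 2ULV patterns in an RR-interval series."""
    rr = np.asarray(rr, dtype=float)
    # Uniform quantization of the series into `levels` symbols.
    edges = np.linspace(rr.min(), rr.max(), levels + 1)
    symbols = np.clip(np.digitize(rr, edges) - 1, 0, levels - 1)
    counts = {"0V": 0, "1V": 0, "2LV": 0, "2ULV": 0}
    for a, b, c in zip(symbols, symbols[1:], symbols[2:]):
        d1, d2 = b - a, c - b
        if d1 == 0 and d2 == 0:
            counts["0V"] += 1        # no variation
        elif d1 == 0 or d2 == 0:
            counts["1V"] += 1        # one variation
        elif d1 * d2 > 0:
            counts["2LV"] += 1       # two like variations
        else:
            counts["2ULV"] += 1      # two unlike variations
    total = sum(counts.values())
    return {k: 100.0 * v / total for k, v in counts.items()}

rng = np.random.default_rng(1)
rr = 800 + np.cumsum(rng.normal(0, 10, 200))   # toy RR series (ms), 200 beats
print(symbolic_analysis(rr))
```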
Abstract:
We used the statistical measurements of information entropy, disequilibrium and complexity to infer a hierarchy of equations of state for two types of compact stars from the broad class of neutron stars, namely, with hadronic composition and with strange quark composition. Our results show that, since order costs energy, Nature would favor the exotic strange stars, even though the question of how to form the strange stars cannot be answered within this approach. (C) 2012 Elsevier B.V. All rights reserved.
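A hedged sketch of the three statistical measures named above for a discrete probability distribution: Shannon information entropy H, disequilibrium D (distance from the uniform distribution) and the complexity C = H·D, in the spirit of López-Ruiz/Mancini/Calbet-type measures; how the authors obtain the distributions from the stellar equations of state is not reproduced here.

```python
import numpy as np

def entropy_disequilibrium_complexity(p):
    """Normalized Shannon entropy H, disequilibrium D and complexity C = H * D."""
    p = np.asarray(p, dtype=float)
    p = p / p.sum()
    n = len(p)
    nonzero = p[p > 0]
    H = -np.sum(nonzero * np.log(nonzero)) / np.log(n)   # entropy, normalized to [0, 1]
    D = np.sum((p - 1.0 / n) ** 2)                        # distance from uniformity
    return H, D, H * D

print(entropy_disequilibrium_complexity([0.25, 0.25, 0.25, 0.25]))  # uniform: D = 0, so C = 0
print(entropy_disequilibrium_complexity([0.7, 0.1, 0.1, 0.1]))
```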
Abstract:
Lattice calculations of the QCD trace anomaly at temperatures T < 160 MeV have been shown to match hadron resonance gas model calculations, which include an exponentially rising hadron mass spectrum. In this paper we perform a more detailed comparison of the model calculations to lattice data that confirms the need for an exponentially increasing density of hadronic states. Also, we find that the lattice data are compatible with a hadron density of states that goes as ρ(m) ~ m^(-a) exp(m/T_H) at large m with a > 5/2 (where T_H ≈ 167 MeV). With this specific subleading contribution to the density of states, heavy resonances are most likely to undergo two-body decay (instead of multiparticle decay), which facilitates their inclusion into hadron transport codes. Moreover, estimates for the shear viscosity and the shear relaxation time coefficient of the hadron resonance model computed within the excluded volume approximation suggest that these transport coefficients are sensitive to the parameters that define the hadron mass spectrum.
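The quoted Hagedorn-like density of states can be written down directly; the normalization constant and the mass values in the sketch below are illustrative assumptions.

```python
import numpy as np

def hadron_density_of_states(m, a=3.0, T_H=167.0, A=1.0):
    """Asymptotic density of hadronic states rho(m) ~ A * m**(-a) * exp(m / T_H), m in MeV."""
    return A * m ** (-a) * np.exp(m / T_H)

masses = np.array([1000.0, 1500.0, 2000.0, 2500.0])   # MeV
print(hadron_density_of_states(masses))               # exponential growth dominates at large m
```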
Abstract:
We study the von Neumann and Rényi entanglement entropy of long-range harmonic oscillators (LRHO) by both theoretical and numerical means. We show that the entanglement entropy in massless harmonic oscillators increases logarithmically with the sub-system size as S ~ (c_eff/3) log l. Although the entanglement entropy of LRHOs shares some similarities with the entanglement entropy at conformal critical points, we show that the Rényi entanglement entropy presents some deviations from the expected conformal behaviour. In the massive case we demonstrate that the behaviour of the entanglement entropy with respect to the correlation length is also logarithmic, as in the short-range case. Copyright (c) EPLA, 2012
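For context, the standard correlation-matrix computation of the von Neumann entanglement entropy of coupled harmonic oscillators is sketched below for a short-range chain; the long-range couplings and the Rényi generalization studied in the paper are omitted, and the chain parameters are assumptions.

```python
import numpy as np
from scipy.linalg import sqrtm

def entanglement_entropy(K, subsystem):
    """Ground-state entanglement entropy of `subsystem` for H = p^2/2 + x.K.x/2."""
    K_sqrt = np.real(sqrtm(K))
    X = 0.5 * np.linalg.inv(K_sqrt)      # position correlations <x_i x_j>
    P = 0.5 * K_sqrt                     # momentum correlations <p_i p_j>
    idx = np.ix_(subsystem, subsystem)
    # Symplectic eigenvalues nu_k of the reduced state; nu = 1/2 modes carry no entropy.
    nu = np.sqrt(np.linalg.eigvals(X[idx] @ P[idx]).real)
    nu = nu[nu > 0.5 + 1e-12]
    return np.sum((nu + 0.5) * np.log(nu + 0.5) - (nu - 0.5) * np.log(nu - 0.5))

# Nearest-neighbour chain of N oscillators; a small mass term is added for numerical stability.
N, mass2 = 60, 1e-6
K = np.diag(np.full(N, 2.0 + mass2)) - np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1)
print([entanglement_entropy(K, list(range(l))) for l in (5, 10, 20)])   # grows roughly like log(l)
```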
Abstract:
Background: Recently, it was realized that the functional connectivity networks estimated from actual brain-imaging technologies (MEG, fMRI and EEG) can be analyzed by means of graph theory, that is, a mathematical representation of a network, which is essentially reduced to nodes and connections between them. Methods: We used high-resolution EEG technology to enhance the poor spatial information of the EEG activity on the scalp; it gives a measure of the electrical activity on the cortical surface. Afterwards, we used the Directed Transfer Function (DTF), a multivariate spectral measure for the estimation of the directional influences between any given pair of channels in a multivariate dataset. Finally, a graph theoretical approach was used to model the brain networks as graphs. These methods were used to analyze the structure of cortical connectivity during the attempt to move a paralyzed limb in a group (N=5) of spinal cord injured (SCI) patients and during movement execution in a group (N=5) of healthy subjects. Results: Analysis performed on the cortical networks estimated from the group of normal subjects and SCI patients revealed that both groups present few nodes with a high out-degree value (i.e. outgoing links). This property holds in the networks estimated for all the frequency bands investigated. In particular, cingulate motor area (CMA) ROIs act as "hubs" for the outflow of information in both groups, SCI and healthy. Results also suggest that spinal cord injuries affect the functional architecture of the cortical network subserving the volition of motor acts mainly in its local feature property. In particular, a higher local efficiency E_l can be observed in the SCI patients for three frequency bands: theta (3-6 Hz), alpha (7-12 Hz) and beta (13-29 Hz). By taking into account all the possible pathways between different ROI couples, we were able to separate clearly the network properties of the SCI group from those of the CTRL group. In particular, we report a sort of compensatory mechanism in the SCI patients for the theta (3-6 Hz) frequency band, indicating a higher level of "activation" Ω within the cortical network during the motor task. The activation index is directly related to diffusion, a type of dynamics that underlies several biological systems, including possible spreading of neuronal activation across several cortical regions. Conclusions: The present study aims at demonstrating the possible applications of graph theoretical approaches in the analysis of brain functional connectivity from EEG signals. In particular, the methodological aspects of i) estimating cortical activity from scalp EEG signals, ii) estimating functional connectivity and iii) computing graph theoretical indices are emphasized in the present paper to show their impact in a real application.
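As an assumed illustration of two of the graph indices mentioned above, the sketch below computes out-degree "hubs" and a local efficiency value for a toy directed graph with networkx; the DTF-based connectivity estimation from EEG is not reproduced.

```python
import networkx as nx

# Toy directed connectivity graph standing in for an estimated cortical network.
g = nx.gnp_random_graph(20, 0.2, seed=7, directed=True)

# Nodes with the largest out-degree act as "hubs" for the outflow of information.
hubs = sorted(g.nodes, key=g.out_degree, reverse=True)[:3]

# networkx's local_efficiency is defined for undirected graphs, so an undirected
# copy is used here as an approximation of the abstract's E_l index.
local_eff = nx.local_efficiency(g.to_undirected())
print(hubs, local_eff)
```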
Abstract:
Background: Prostate cancer is a serious public health problem that affects quality of life and has a significant mortality rate. The aim of the present study was to quantify the fractal dimension and Shannon's entropy in the histological diagnosis of prostate cancer. Methods: Thirty-four patients with prostate cancer, aged 50 to 75 years, who had undergone radical prostatectomy participated in the study. Histological slides of normal (N), hyperplastic (H) and tumor (T) areas of the prostate were digitally photographed at three different magnifications (40x, 100x and 400x) and analyzed. The fractal dimension (FD), Shannon's entropy (SE) and number of cell nuclei (NCN) in these areas were compared. Results: FD analysis demonstrated the following significant differences between groups: T vs. N and H vs. N (p < 0.05) at a magnification of 40x; T vs. N (p < 0.01) at 100x; and H vs. N (p < 0.01) at 400x. SE analysis revealed the following significant differences between groups: T vs. H and T vs. N (p < 0.05) at 100x; and T vs. H and T vs. N (p < 0.001) at 400x. NCN analysis demonstrated the following significant differences between groups: T vs. H and T vs. N (p < 0.05) at 40x; T vs. H and T vs. N (p < 0.0001) at 100x; and T vs. H and T vs. N (p < 0.01) at 400x. Conclusions: The quantification of the FD and SE, together with the number of cell nuclei, has potential clinical applications in the histological diagnosis of prostate cancer.
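A minimal sketch (with assumed preprocessing) of the two image measures used in the study: a box-counting estimate of the fractal dimension of a binarized image and the Shannon entropy of the gray-level histogram; the random image stands in for a micrograph, and the segmentation of cell nuclei is not performed.

```python
import numpy as np

def box_counting_dimension(binary_image, box_sizes=(2, 4, 8, 16, 32)):
    """Estimate the fractal dimension as the slope of log N(boxes) vs. log(1/box size)."""
    counts = []
    for size in box_sizes:
        h = binary_image.shape[0] // size * size
        w = binary_image.shape[1] // size * size
        blocks = binary_image[:h, :w].reshape(h // size, size, w // size, size)
        counts.append(np.sum(blocks.any(axis=(1, 3))))   # boxes containing foreground
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

def shannon_entropy(gray_image, bins=256):
    """Shannon entropy (bits) of the gray-level histogram."""
    hist, _ = np.histogram(gray_image, bins=bins, range=(0, 255))
    p = hist[hist > 0] / hist.sum()
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(2)
gray = rng.integers(0, 256, size=(256, 256))           # toy stand-in for a micrograph
print(box_counting_dimension(gray > 128), shannon_entropy(gray))
```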