102 results for hierarchical entropy
in the Biblioteca Digital da Produção Intelectual da Universidade de São Paulo (BDPI/USP)
Abstract:
We analyze the irreversibility and the entropy production in nonequilibrium interacting particle systems described by a Fokker-Planck equation through a suitable master equation representation. The irreversible character is provided either by nonconservative forces or by contact with heat baths at distinct temperatures. The expression for the entropy production is deduced from a general definition, which relates the probability of a trajectory in phase space to that of its time reversal and makes no a priori reference to the dissipated power. Our formalism is applied to calculate the heat conductance in a simple system consisting of two Brownian particles, each in contact with a heat reservoir. We also show the connection between the definition of the entropy production rate and the Jarzynski equality.
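For orientation, the following is a minimal sketch of the two standard expressions this type of analysis rests on: the trajectory-level entropy production obtained by comparing a path with its time reversal, and the Schnakenberg-type rate for a master equation with transition rates W_ij. The exact expressions derived in the paper may differ in notation and normalization.

\[
  \Delta S_{\mathrm{tot}} = k_B \,\ln \frac{\mathcal{P}[\Gamma]}{\mathcal{P}[\tilde{\Gamma}]},
  \qquad
  \Pi(t) = \frac{k_B}{2} \sum_{i,j}
    \bigl[ W_{ij} P_j(t) - W_{ji} P_i(t) \bigr]
    \ln \frac{W_{ij} P_j(t)}{W_{ji} P_i(t)} \;\ge\; 0,
\]

where \(\Gamma\) is a phase-space trajectory, \(\tilde{\Gamma}\) its time reversal, \(W_{ij}\) the rate for the transition \(j \to i\), and \(P_i(t)\) the occupation probability.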
Abstract:
The structure of probability currents is studied for the dynamical network obtained after consecutive contractions of two-state, nonequilibrium lattice systems. This procedure allows us to investigate the transition rates between configurations on small clusters and highlights some relevant effects of lattice symmetries on the elementary transitions responsible for entropy production. A method is suggested to estimate the entropy production at different levels of approximation (cluster sizes), as demonstrated for the two-dimensional contact process with mutation.
Abstract:
Using the density matrix renormalization group, we investigate the Rényi entropy of the anisotropic spin-s Heisenberg chains in a z-magnetic field. We considered the half-odd-integer spin-s chains, with s = 1/2, 3/2, and 5/2, and periodic and open boundary conditions. In the case of the spin-1/2 chain we were able to obtain accurate estimates of the new parity exponents $p_\alpha^{(p)}$ and $p_\alpha^{(o)}$ that give the power-law decay of the oscillations of the α-Rényi entropy for periodic and open boundary conditions, respectively. We confirm the relations of these exponents with the Luttinger parameter K, as proposed by Calabrese et al. [Phys. Rev. Lett. 104, 095701 (2010)]. Moreover, the predicted periodicity of the oscillating term was also observed for some nonzero values of the magnetization m. We show that for s > 1/2 the amplitudes of the oscillations are quite small, and obtaining accurate estimates of $p_\alpha^{(p)}$ and $p_\alpha^{(o)}$ becomes a challenge. Although our estimates of the new universal exponents $p_\alpha^{(p)}$ and $p_\alpha^{(o)}$ for the spin-3/2 chain are not so accurate, they are consistent with the theoretical predictions.
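For reference, the α-Rényi entanglement entropy of a block A is conventionally defined as follows (this is the standard textbook definition, not a formula specific to this paper):

\[
  S_\alpha(A) = \frac{1}{1-\alpha} \ln \operatorname{Tr} \rho_A^{\alpha},
  \qquad
  \lim_{\alpha \to 1} S_\alpha(A) = - \operatorname{Tr}\, \rho_A \ln \rho_A ,
\]

where \(\rho_A\) is the reduced density matrix of the block; the parity exponents above govern the power-law decay of the oscillating finite-size correction to this quantity.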
Abstract:
Background: The inference of gene regulatory networks (GRNs) from large-scale expression profiles is currently one of the most challenging problems in Systems Biology. Many techniques and models have been proposed for this task. However, it is generally not possible to recover the original topology with great accuracy, mainly because the time series are short relative to the high complexity of the networks and because of the intrinsic noise of the expression measurements. In order to improve the accuracy of entropy-based (mutual information) GRN inference methods, a new criterion function is proposed here. Results: In this paper we introduce the use of the generalized entropy proposed by Tsallis for the inference of GRNs from time series expression profiles. The inference process is based on a feature selection approach, and the conditional entropy is applied as the criterion function. To assess the proposed methodology, the algorithm is applied to recover the network topology from temporal expression data generated by an artificial gene network (AGN) model as well as from the DREAM challenge. The adopted AGN is based on theoretical models of complex networks, and its gene transfer functions are obtained by random draws from the set of possible Boolean functions, thus creating its dynamics. The DREAM time series data, on the other hand, vary in network size, and their topologies are based on real networks; their dynamics are generated by continuous differential equations with noise and perturbations. By adopting both data sources, it is possible to estimate the average quality of the inference with respect to different network topologies, transfer functions and network sizes. Conclusions: A remarkable improvement in accuracy was observed in the experimental results: the non-Shannon entropy reduced the number of false connections in the inferred topology. The best value of the Tsallis free parameter was on average in the range 2.5 ≤ q ≤ 3.5 (hence, subextensive entropy), which opens new perspectives for GRN inference methods based on information theory and for investigation of the nonextensivity of such networks. The inference algorithm and criterion function proposed here were implemented and included in the DimReduction software, which is freely available at http://sourceforge.net/projects/dimreduction and http://code.google.com/p/dimreduction/.
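To make the criterion concrete, below is a minimal Python sketch of a Tsallis conditional-entropy criterion for selecting predictor genes from discretized time series. The function names, the binary discretization and the toy data are illustrative assumptions; this is not the DimReduction implementation.

import numpy as np
from itertools import combinations

def tsallis_entropy(p, q=2.5):
    # Tsallis (nonextensive) entropy of a discrete distribution p;
    # reduces to the Shannon entropy in the limit q -> 1.
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if abs(q - 1.0) < 1e-12:
        return -np.sum(p * np.log(p))
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

def conditional_tsallis_entropy(target, predictors, q=2.5):
    # Mean Tsallis entropy of `target` conditioned on the joint states of
    # the predictor genes (columns of a discretized expression matrix).
    predictors = np.asarray(predictors)
    target = np.asarray(target)
    states = [tuple(row) for row in predictors]
    h = 0.0
    for state in set(states):
        mask = np.array([s == state for s in states])
        _, counts = np.unique(target[mask], return_counts=True)
        h += (mask.sum() / len(target)) * tsallis_entropy(counts / counts.sum(), q)
    return h

# Toy usage: pick the pair of predictor genes that minimizes the criterion.
rng = np.random.default_rng(0)
expr = rng.integers(0, 2, size=(50, 5))   # 50 time points, 5 binarized genes
target = expr[:, 0] ^ expr[:, 1]          # target regulated by genes 0 and 1
best = min(combinations(range(5), 2),
           key=lambda idx: conditional_tsallis_entropy(target, expr[:, list(idx)]))
print(best)  # expected: (0, 1)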
Abstract:
This paper presents an Adaptive Maximum Entropy (AME) approach for modeling biological species. The Maximum Entropy algorithm (MaxEnt) is one of the most widely used methods for modeling the geographical distribution of biological species. The approach presented here is an alternative to the classical algorithm. Instead of using the same set of features throughout training, the AME approach tries to insert or remove a single feature at each iteration. The aim is to reach convergence faster without affecting the performance of the generated models. Preliminary experiments performed well, showing improvements in both accuracy and execution time. Comparisons with other algorithms are beyond the scope of this paper. Several important lines of research are proposed as future work.
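As a schematic of the insert-or-remove-one-feature idea only (not the AME algorithm itself: the scoring function, stopping rule and the MaxEnt fitting step are assumptions here), a greedy loop of this shape could look like:

def adaptive_feature_loop(candidates, score, max_iters=100):
    # Greedy sketch: at each iteration try to insert or remove a single
    # feature, keeping the move only if it improves `score` (for MaxEnt this
    # could be the training log-likelihood of the refitted model).
    active, best = set(), score(set())
    for _ in range(max_iters):
        moves = [active | {f} for f in candidates - active] + \
                [active - {f} for f in active]
        if not moves:
            break
        candidate = max(moves, key=score)
        if score(candidate) <= best:
            break                      # converged: no single move helps
        active, best = candidate, score(candidate)
    return active, best

# Toy usage with a synthetic score that rewards the feature set {0, 2}.
toy_score = lambda s: -len(s ^ {0, 2})
print(adaptive_feature_loop(set(range(5)), toy_score))   # ({0, 2}, 0)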
Abstract:
This work shows the application of the analytic hierarchy process (AHP) to full cost accounting (FCA) within the integrated resource planning (IRP) process. For this purpose, a pioneering case was developed in which different supply- and demand-side energy solutions for a metropolitan airport (Congonhas) were considered [Moreira, E.M., 2005. Modelamento energetico para o desenvolvimento limpo de aeroporto metropolitano baseado na filosofia do PIR - O caso da metropole de Sao Paulo. Dissertacao de mestrado, GEPEA/USP]. These solutions were compared and analyzed using the "Decision Lens" software, which implements the AHP. The final part of this work presents a classification of the resources that can be considered initial targets as energy resources, thus facilitating the constraints of the airport's IRP and setting parameters aimed at sustainable development.
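For readers unfamiliar with the AHP step, the sketch below computes priority weights and Saaty's consistency ratio from a pairwise-comparison matrix. It is the generic textbook computation, not the internals of the Decision Lens software, and the example matrix is invented.

import numpy as np

def ahp_priorities(pairwise):
    # Priority weights from an AHP pairwise-comparison matrix via the
    # principal eigenvector, plus Saaty's consistency ratio.
    A = np.asarray(pairwise, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    weights = np.abs(eigvecs[:, k].real)
    weights /= weights.sum()
    random_index = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
                    6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}[n]
    ci = (eigvals.real[k] - n) / (n - 1) if n > 1 else 0.0
    cr = ci / random_index if random_index > 0 else 0.0
    return weights, cr

# Toy usage: three hypothetical energy options compared on a single criterion.
A = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
weights, consistency_ratio = ahp_priorities(A)
print(weights, consistency_ratio)   # weights sum to 1; CR < 0.1 is usually acceptable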
Abstract:
A chemotaxonomic analysis is described of a database containing various types of compounds from the Heliantheae tribe (Asteraceae) using Self-Organizing Maps (SOM). The numbers of occurrences of 9 chemical classes in different taxa of the tribe were used as variables. The study shows that SOM applied to chemical data can contribute to differentiating genera, subtribes, and groups of subtribes (subtribe branches), as well as to the tribal and subtribal classifications of Heliantheae, exhibiting a high hit percentage comparable to that of an expert and in agreement with the previous tribe classification proposed by Stuessy.
Abstract:
We analyze the influence of time-, firm-, industry- and country-level determinants of capital structure. First, we apply hierarchical linear modeling in order to assess the relative importance of those levels. We find that the time and firm levels explain 78% of firm leverage. Second, we include random intercepts and random coefficients in order to analyze the direct and indirect influences of firm, industry and country characteristics on firm leverage. We document several important indirect influences of industry- and country-level variables on the firm-level determinants of leverage, as well as several structural differences in the financial behavior of firms from developed and emerging countries.
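As a sketch of the variance decomposition a hierarchical (multilevel) linear model provides, a null random-intercept specification and the resulting share of leverage variance per level can be written as follows; this is the generic textbook form, whereas the model in the paper also includes covariates and random slopes:

\[
  \mathrm{Lev}_{it} = \gamma_{0} + u_{\mathrm{country}(i)} + u_{\mathrm{industry}(i)} + u_{\mathrm{firm}(i)} + e_{it},
  \qquad
  \mathrm{share}_\ell = \frac{\sigma_\ell^2}{\sigma_{\mathrm{country}}^2 + \sigma_{\mathrm{industry}}^2 + \sigma_{\mathrm{firm}}^2 + \sigma_{e}^2},
\]

where \(\mathrm{Lev}_{it}\) is the leverage of firm i in year t and each \(u\) is a random effect with variance \(\sigma_\ell^2\).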
Abstract:
This study aims to elaborate a hierarchical risk scale (HRS) for agricultural and cattle-breeding activities and to classify the main agricultural crops and cattle-breeding activities according to their risk levels. The research is exploratory and quantitative and is based on earlier work on risk assessment (MARKOWITZ, 1952) and capital cost calculation (SHARPE, 1964) for other business segments. The calculations on agricultural and cattle-breeding data cover the period from 2000 to 2006. The methods used incorporate the simplifications and adaptations needed to achieve the proposed objective. The final result, pioneering and embryonic, provides support for improving the management of these activities, which are essential to produce food for society.
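The two classical results the study builds on can be stated compactly; how they were adapted to crop and cattle-breeding data is specific to the paper and is not reproduced here:

\[
  \sigma_p^2 = \sum_{i}\sum_{j} w_i w_j \sigma_{ij}
  \;\;\text{(Markowitz portfolio risk)},
  \qquad
  E[R_i] = R_f + \beta_i \bigl( E[R_m] - R_f \bigr),
  \quad
  \beta_i = \frac{\mathrm{Cov}(R_i, R_m)}{\mathrm{Var}(R_m)}
  \;\;\text{(Sharpe CAPM)},
\]

where \(w_i\) are portfolio weights, \(\sigma_{ij}\) return covariances, \(R_f\) the risk-free rate and \(R_m\) the market return.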
Abstract:
Understanding the mating patterns of populations of tree species is a key component of ex situ genetic conservation. In this study, we analysed the genetic diversity, spatial genetic structure (SGS) and mating system at the hierarchical levels of fruits and individuals, as well as pollen dispersal patterns, in a continuous population of Theobroma cacao in Para State, Brazil. A total of 156 individuals in a 0.56 ha plot were mapped and genotyped for nine microsatellite loci. For the mating system analyses, 50 seeds were collected from nine seed trees by sampling five fruits per tree (10 seeds per fruit). Among the 156 individuals, 127 had unique multilocus genotypes, and the remaining were clones. The population was spatially aggregated; it demonstrated a significant SGS up to 15 m that could be attributed primarily to the presence of clones, although the short seed dispersal distance also contributed to this pattern. Population matings occurred mainly via outcrossing, but selfing was observed in some seed trees, which indicated the presence of individual variation for self-incompatibility. The matings were also correlated, especially within fruits ($\hat{r}_{p(m)}$ = 0.607) rather than among fruits ($\hat{r}_{p(m)}$ = 0.099), which suggested that a small number of pollen donors fertilised each fruit. The paternity analysis suggested a high proportion of pollen migration (61.3%), although within the plot most of the pollen dispersal encompassed short distances (28 m). The determination of these novel parameters provides the fundamental information required to establish long-term ex situ conservation strategies for this important tropical species. Heredity (2011) 106, 973-985; doi:10.1038/hdy.2010.145; published online 8 December 2010.
Abstract:
Evidence of jet precession in many galactic and extragalactic sources has been reported in the literature. Much of this evidence is based on studies of the kinematics of the jet knots, which depend on the correct identification of the components to determine their respective proper motions and position angles on the plane of the sky. Identification problems related to fitting procedures, as well as observations poorly sampled in time, may affect the follow-up of the components over time and consequently contribute to a misinterpretation of the data. In order to deal with these limitations, we introduce a very powerful statistical tool to analyse jet precession: the cross-entropy method for continuous multi-extremal optimization. Based only on the raw data of the jet components (right ascension and declination offsets from the core), the cross-entropy method searches for the precession model parameters that best represent the data. In this work we present a large number of tests to validate this technique, using synthetic precessing jets built from a given set of precession parameters. With the aim of recovering these parameters, we applied the cross-entropy method to our precession model, varying exhaustively the quantities associated with the method. Our results show that even in the most challenging tests, the cross-entropy method was able to find the correct parameters within a 1 per cent level. Even for a non-precessing jet, our optimization method successfully pointed out the lack of precession.
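As an illustration of the cross-entropy method in its generic form (not the authors' precession-specific implementation; the toy model below is a stand-in for the precession model, and all names are invented):

import numpy as np

def cross_entropy_minimize(loss, lower, upper, n_samples=200, elite_frac=0.1,
                           n_iters=100, rng=None):
    # Generic cross-entropy method for continuous multi-extremal optimization:
    # sample candidate parameter vectors from a Gaussian, keep the elite
    # fraction with the lowest loss, and refit the Gaussian to the elites.
    rng = rng or np.random.default_rng()
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    mu, sigma = (lower + upper) / 2.0, (upper - lower) / 2.0
    n_elite = max(1, int(elite_frac * n_samples))
    for _ in range(n_iters):
        samples = np.clip(rng.normal(mu, sigma, size=(n_samples, mu.size)),
                          lower, upper)
        losses = np.array([loss(s) for s in samples])
        elites = samples[np.argsort(losses)[:n_elite]]
        mu, sigma = elites.mean(axis=0), elites.std(axis=0) + 1e-12
        if sigma.max() < 1e-8:         # sampling distribution collapsed: stop
            break
    return mu

# Toy usage: recover two parameters of a synthetic signal from noisy offsets.
true = np.array([0.7, -1.3])
model = lambda p, t: p[0] * np.sin(t) + p[1] * np.cos(t)
t = np.linspace(0, 10, 200)
data = model(true, t) + 0.01 * np.random.default_rng(1).normal(size=t.size)
fit = cross_entropy_minimize(lambda p: np.sum((model(p, t) - data) ** 2),
                             lower=[-5, -5], upper=[5, 5])
print(fit)   # should be close to [0.7, -1.3]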
Abstract:
We present a new technique for obtaining model fits to very long baseline interferometric images of astrophysical jets. The method minimizes a performance function proportional to the sum of the squared differences between the model and observed images. The model image is constructed by summing $N_s$ elliptical Gaussian sources, each characterized by six parameters: two-dimensional peak position, peak intensity, eccentricity, amplitude, and orientation angle of the major axis. We present results for the fitting of two benchmark jets: the first constructed from three individual Gaussian sources, the second formed by five Gaussian sources. Both jets were analyzed by our cross-entropy technique in finite and infinite signal-to-noise regimes, with the background noise chosen to mimic that found in interferometric radio maps. Those images were constructed to simulate most of the conditions encountered in interferometric images of active galactic nuclei. We show that the cross-entropy technique is capable of recovering the parameters of the sources with an accuracy similar to that obtained with the traditional Astronomical Image Processing System task IMFIT when the image is relatively simple (e.g., few components). For more complex interferometric maps, our method displays superior performance in recovering the parameters of the jet components. Our methodology is also able to indicate quantitatively the number of individual components present in an image. An additional application of the cross-entropy technique to a real image of a BL Lac object is shown and discussed. Our results indicate that our cross-entropy model-fitting technique should be used in situations involving the analysis of complex emission regions with more than three sources, even though it is substantially slower than current model-fitting tasks (at least 10,000 times slower on a single processor, depending on the number of sources to be optimized). As in the case of any model fitting performed in the image plane, caution is required in analyzing images constructed from a poorly sampled (u, v) plane.
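One conventional way of writing the model image and the performance function being minimized is sketched below; the paper's exact parameterization (peak intensity, eccentricity, amplitude) may be organized differently:

\[
  I_{\mathrm{mod}}(x, y) = \sum_{k=1}^{N_s} A_k
  \exp\!\left[-\frac{1}{2}\left(\frac{x_k'^2}{\sigma_{k,\mathrm{maj}}^2}
  + \frac{y_k'^2}{\sigma_{k,\mathrm{min}}^2}\right)\right],
  \qquad
  S(\boldsymbol{\theta}) \propto \sum_{\mathrm{pixels}}
  \bigl[I_{\mathrm{obs}}(x, y) - I_{\mathrm{mod}}(x, y)\bigr]^2,
\]

where \((x_k', y_k')\) are the pixel coordinates translated to the peak of component k and rotated by its orientation angle, and \(\boldsymbol{\theta}\) collects the \(6 N_s\) parameters over which the cross-entropy search is carried out.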
Abstract:
Non-linear methods for estimating variability in time series are currently in widespread use. Among such methods are approximate entropy (ApEn) and sample entropy (SampEn). The applicability of ApEn and SampEn in analyzing data is evident and their use is increasing. However, consistency is a point of concern for these tools: the classification of the temporal organization of a data set might indicate a relatively less ordered series in relation to another when the opposite is true. As highlighted by their own proponents, ApEn and SampEn might give incorrect results due to this lack of consistency. In this study, we present a method which gains consistency by applying ApEn repeatedly over a wide range of combinations of window lengths and matching error tolerances. The tool is called volumetric approximate entropy, vApEn. We analyze nine artificially generated prototypical time series with different degrees of temporal order (combinations of sine waves, logistic maps with different control parameter values, and random noises). While ApEn/SampEn clearly fail to consistently identify the temporal order of the sequences, vApEn does so correctly. In order to validate the tool we performed shuffled and surrogate data analyses. Statistical analysis confirmed the consistency of the method.
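For reference, here is a minimal NumPy sketch of the standard ApEn computation that vApEn sweeps over many (m, r) combinations; parameter names and the toy signals are illustrative, and this is not the authors' implementation:

import numpy as np

def approximate_entropy(x, m=2, r=0.2):
    # Approximate entropy (Pincus) for embedding dimension m and tolerance r,
    # with r expressed as a fraction of the series' standard deviation.
    x = np.asarray(x, dtype=float)
    n, tol = len(x), r * np.std(x)

    def phi(m):
        # All length-m templates compared with the Chebyshev (max) distance;
        # self-matches are included, as in the original ApEn definition.
        templates = np.array([x[i:i + m] for i in range(n - m + 1)])
        dists = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        return np.mean(np.log((dists <= tol).mean(axis=1)))

    return phi(m) - phi(m + 1)

# Ordered vs. disordered toy signals: ApEn should be lower for the sine wave.
rng = np.random.default_rng(0)
t = np.linspace(0, 8 * np.pi, 500)
print(approximate_entropy(np.sin(t)), approximate_entropy(rng.normal(size=500)))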
Abstract:
Point placement strategies aim at mapping data points represented in higher dimensions to two-dimensional spaces and are frequently used to visualize relationships amongst data instances. They have been valuable tools for the analysis and exploration of data sets of various kinds. Many conventional techniques, however, do not behave well when the number of dimensions is high, such as in the case of document collections. Later approaches handle that shortcoming, but may cause too much clutter to allow flexible exploration to take place. In this work we present a novel hierarchical point placement technique that is capable of dealing with these problems. While good grouping and separation of data with high similarity are maintained without increasing the computational cost, its hierarchical structure lends itself both to exploration at various levels of detail and to handling data in subsets, improving analysis capability and also allowing manipulation of larger data sets.
Abstract:
Several popular Machine Learning techniques were originally designed for the solution of two-class problems; however, many classification problems have more than two classes. One approach to dealing with multiclass problems using binary classifiers is to decompose the multiclass problem into multiple binary sub-problems arranged in a binary tree. This approach requires a binary partition of the classes at each node of the tree, which defines the tree structure. This paper presents two algorithms that determine the tree structure taking into account information collected from the dataset used. This approach allows the tree structure to be determined automatically for any multiclass dataset.
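To illustrate the general idea of deriving the class partition at each node from the data (the sketch below uses 2-means clustering of class centroids, which is only one possible criterion and not necessarily either of the paper's two algorithms):

import numpy as np

def two_means(points, n_iters=20):
    # Tiny 2-means used only to bipartition the class centroids.
    centers = points[[0, -1]].astype(float)
    labels = np.zeros(len(points), dtype=int)
    for _ in range(n_iters):
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for k in (0, 1):
            if np.any(labels == k):
                centers[k] = points[labels == k].mean(axis=0)
    return labels

def build_class_tree(X, y, classes=None):
    # Recursively split the current set of classes into two groups by
    # clustering their centroids; each internal node would then host a binary
    # classifier trained to separate its two groups (classifiers omitted here).
    classes = sorted(set(y)) if classes is None else classes
    if len(classes) == 1:
        return classes[0]                              # leaf: a single class
    centroids = np.array([X[y == c].mean(axis=0) for c in classes])
    labels = two_means(centroids)
    left = [c for c, l in zip(classes, labels) if l == 0]
    right = [c for c, l in zip(classes, labels) if l == 1]
    if not left or not right:                          # degenerate split
        left, right = classes[:len(classes) // 2], classes[len(classes) // 2:]
    return (build_class_tree(X, y, left), build_class_tree(X, y, right))

# Toy usage: four Gaussian blobs yield a nested tuple describing the tree.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.3, size=(30, 2))
               for c in ([0, 0], [0, 4], [5, 0], [5, 4])])
y = np.repeat([0, 1, 2, 3], 30)
print(build_class_tree(X, y))   # e.g. ((0, 1), (2, 3))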