932 results for Information dispersal algorithm
Abstract:
The parallel mutation-selection evolutionary dynamics, in which mutation and replication are independent events, is solved exactly in the case that the Malthusian fitnesses associated with the genomes are described by the random energy model (REM) and by a ferromagnetic version of the REM. The solution method maps the evolutionary dynamics onto a quantum Ising chain in a transverse field and uses the Suzuki-Trotter formalism to calculate the transition probabilities between configurations at different times. We find that in the case of the REM landscape the dynamics can exhibit three distinct regimes: pure diffusion or stasis for short times, depending on the fitness of the initial configuration, and a spin-glass regime for long times. The transitions between these regimes are marked by discontinuities in the mean fitness as well as in the overlap with the initial reference sequence. The relaxation to equilibrium is described by an inverse-time decay. In the ferromagnetic REM we find, in addition to these three regimes, a ferromagnetic regime in which the overlap and the mean fitness are frozen. In this case, the system relaxes to equilibrium in a finite time. The relevance of our results to information-processing aspects of evolution is discussed.
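For context, the parallel (Crow-Kimura) mutation-selection dynamics referred to in this abstract is commonly written, in our notation and up to convention-dependent constants (this is a standard textbook form, not necessarily the exact one used by the authors), as:

```latex
\frac{dx_\sigma}{dt} \;=\; \bigl[f(\sigma) - \bar f(t)\bigr]\, x_\sigma
  \;+\; \mu \sum_{\sigma' :\, d(\sigma,\sigma')=1} \bigl(x_{\sigma'} - x_\sigma\bigr),
\qquad
\bar f(t) = \sum_{\sigma} f(\sigma)\, x_\sigma(t),
```

where $x_\sigma$ is the frequency of genome $\sigma$, $f(\sigma)$ its Malthusian fitness, $\mu$ the mutation rate per site, and the sum runs over single-site mutants. The unnormalized linear version of this equation evolves as $e^{-Ht}$ under a quantum Ising Hamiltonian with a transverse field proportional to $\mu$, which is the mapping exploited via the Suzuki-Trotter formalism.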
Abstract:
Background: The inference of gene regulatory networks (GRNs) from large-scale expression profiles is currently one of the most challenging problems in systems biology. Many techniques and models have been proposed for this task. However, it is generally not possible to recover the original topology with great accuracy, mainly because the time series are short relative to the high complexity of the networks and because of the intrinsic noise of the expression measurements. In order to improve the accuracy of entropy-based (mutual information) GRN inference methods, a new criterion function is proposed here. Results: In this paper we introduce the use of the generalized entropy proposed by Tsallis for the inference of GRNs from time series expression profiles. The inference process is based on a feature selection approach, with the conditional entropy applied as the criterion function. To assess the proposed methodology, the algorithm is applied to recover the network topology from temporal expressions generated by an artificial gene network (AGN) model as well as from the DREAM challenge. The adopted AGN is based on theoretical models of complex networks, and its gene transfer functions are drawn at random from the set of possible Boolean functions, thus creating its dynamics. The DREAM time series data, in contrast, vary in network size, and their topologies are based on real networks; their dynamics are generated by continuous differential equations with noise and perturbation. By adopting both data sources, it is possible to estimate the average quality of the inference with respect to different network topologies, transfer functions and network sizes. Conclusions: A remarkable improvement in accuracy was observed in the experimental results: the non-Shannon entropy reduced the number of false connections in the inferred topology.
The best value of the Tsallis free parameter was on average in the range 2.5 <= q <= 3.5 (hence, subextensive entropy), which opens new perspectives for GRN inference methods based on information theory and for the investigation of the nonextensivity of such networks. The inference algorithm and criterion function proposed here were implemented and included in the DimReduction software, which is freely available at http://sourceforge.net/projects/dimreduction and http://code.google.com/p/dimreduction/.
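A minimal sketch of the criterion described above: the Tsallis entropy and an empirical conditional version usable as a feature-selection score. The function names and the simple group-by estimation scheme are ours for illustration; this is not the DimReduction implementation.

```python
import numpy as np
from collections import Counter, defaultdict

def tsallis_entropy(p, q):
    """Tsallis entropy S_q = (1 - sum_i p_i^q) / (q - 1); recovers Shannon as q -> 1."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if abs(q - 1.0) < 1e-12:
        return float(-np.sum(p * np.log(p)))  # Shannon limit
    return float((1.0 - np.sum(p ** q)) / (q - 1.0))

def conditional_tsallis(features, y, q):
    """Empirical H_q(Y | X): group the samples by the (discrete) feature tuple and
    average the Tsallis entropy of Y within each group, weighted by group size."""
    groups = defaultdict(list)
    for x_row, yi in zip(zip(*features), y):
        groups[x_row].append(yi)
    n = len(y)
    h = 0.0
    for ys in groups.values():
        counts = Counter(ys)
        p = [c / len(ys) for c in counts.values()]
        h += (len(ys) / n) * tsallis_entropy(p, q)
    return h
```

In a feature-selection loop, the candidate predictor subset minimizing `conditional_tsallis` of the target gene would be chosen, with q in the subextensive range reported above.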
Abstract:
Context tree models were introduced by Rissanen in [25] as a parsimonious generalization of Markov models. Since then, they have been widely used in applied probability and statistics. The present paper investigates non-asymptotic properties of two popular procedures for context tree estimation: Rissanen's algorithm Context and penalized maximum likelihood. After first showing how the two are related, we prove finite-horizon bounds for the probability of overestimation and underestimation. Concerning overestimation, no boundedness or loss-of-memory conditions are required: the proof relies on new deviation inequalities for empirical probabilities that are of independent interest. The underestimation properties rely on classical hypotheses for processes of infinite memory. These results improve on and generalize the bounds obtained in Duarte et al. (2006) [12], Galves et al. (2008) [18], Galves and Leonardi (2008) [17], and Leonardi (2010) [22], refining the asymptotic results of Buhlmann and Wyner (1999) [4] and Csiszar and Talata (2006) [9]. (C) 2011 Elsevier B.V. All rights reserved.
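To make the two procedures concrete, here is a toy context tree estimator in the spirit of algorithm Context: a candidate context is refined only when the log-likelihood gain of splitting it exceeds a penalty. The BIC-style penalty constant and all function names are our illustrative choices, not the paper's.

```python
from collections import defaultdict
from math import log

def context_tree(s, max_depth=3, pen=None):
    """Estimate the leaf contexts of a context tree from string s by
    penalized likelihood-gain pruning (a sketch, not the paper's estimator)."""
    n = len(s)
    if pen is None:
        pen = 2.0 * log(n)          # BIC-style penalty, an illustrative choice
    counts = defaultdict(lambda: defaultdict(int))
    for i in range(n):              # count symbol occurrences after each context
        for d in range(max_depth + 1):
            if i - d < 0:
                break
            counts[s[i - d:i]][s[i]] += 1

    def gain(w):
        """Log-likelihood gain of refining context w by one extra past symbol."""
        g = 0.0
        nw = sum(counts[w].values())
        for bw, dist in counts.items():
            if len(bw) == len(w) + 1 and bw[1:] == w:
                nbw = sum(dist.values())
                for a, c in dist.items():
                    g += c * log((c / nbw) / (counts[w][a] / nw))
        return g

    leaves = set()
    def grow(w):
        if len(w) < max_depth and gain(w) > pen:
            for b in set(s):        # split: recurse into the longer contexts
                if sum(counts[b + w].values()) > 0:
                    grow(b + w)
        else:
            leaves.add(w)
    grow("")
    return leaves
```

On a strongly order-1 sequence such as an alternating string, the estimator keeps exactly the depth-1 contexts and prunes everything deeper, since refining them yields zero likelihood gain.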
Abstract:
It has been suggested that seeds of Coussapoa asperifolia magnifolia could be dispersed endozoochorically by frugivorous birds and monkeys, since the fruits are red when ripe, or exozoochorically, since the exocarp is mucilaginous and sticky. However, our field observations showed only stingless bees collecting the exocarp with seeds of C. asperifolia magnifolia, which they use for building and repairing their nests, from which the plants sprout. This paper aimed to determine the chemical composition of the fruit, since we postulated that C. asperifolia magnifolia is consumed by neither birds nor monkeys because it is very sticky and apparently resinous. Apolar extract analyses revealed that the fruits are not resinous but extremely rich in waxes (mainly esterified triglycerides), and polar extract analyses revealed a sugar content close to the sensorial minimum level. This probably explains why only stingless bees are seen visiting the fruits and dispersing the seeds.
Abstract:
A novel flow-based strategy for implementing simultaneous determinations of different chemical species reacting with the same reagent(s) at different rates is proposed and applied to the spectrophotometric catalytic determination of iron and vanadium in Fe-V alloys. The method relies on the influence of Fe(II) and V(IV) on the rate of iodide oxidation by Cr(VI) under acidic conditions; the Jones reducing agent is therefore needed. Three different plugs of the sample are sequentially inserted into an acidic KI reagent carrier stream, and a confluent Cr(VI) solution is added downstream. Overlap between the inserted plugs leads to a complex sample zone with several regions of maximal and minimal absorbance values. Measurements performed on these regions reveal the different degrees of reaction development and tend to be more precise. The data are treated by multivariate calibration involving the PLS algorithm. The proposed system is very simple and rugged. Two latent variables accounted for ca. 95% of the analytical information, and the results are in agreement with ICP-OES. (C) 2010 Elsevier B.V. All rights reserved.
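For readers unfamiliar with the chemometric step, the following is a minimal PLS1 regression (single response, NIPALS-style deflation) of the kind used for such multivariate calibration. The data and component count are illustrative; this is a generic sketch, not the calibration model of the study.

```python
import numpy as np

def pls1_fit(X, y, n_components=2):
    """Minimal PLS1 regression (single response) with NIPALS-style deflation."""
    xm, ym = X.mean(axis=0), y.mean()
    Xk, yk = X - xm, y - ym
    W, P, Q = [], [], []
    for _ in range(n_components):
        w = Xk.T @ yk
        w = w / np.linalg.norm(w)       # weight vector
        t = Xk @ w                      # latent-variable scores
        tt = float(t @ t)
        p = Xk.T @ t / tt               # X loadings
        q = float(yk @ t) / tt          # y loading
        Xk = Xk - np.outer(t, p)        # deflate X and y
        yk = yk - q * t
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    B = W @ np.linalg.inv(P.T @ W) @ Q  # coefficients in the original X space
    return B, xm, ym

def pls1_predict(model, X):
    B, xm, ym = model
    return (X - xm) @ B + ym
```

With as many latent variables as independent predictors and noise-free data, PLS1 reproduces the least-squares fit exactly; with fewer (e.g. the two latent variables of the abstract), it regularizes by truncation.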
Abstract:
In this work we study an agent-based model to investigate the role of the degree of information asymmetry in market evolution. The model is quite simple and can be treated analytically, since consumers evaluate the quality of a given good taking into account only the quality of the last good purchased plus their perceptive capacity beta. As a consequence, the system evolves according to a stationary Markov chain. The value of a good offered by the firms increases with quality according to an exponent alpha, which is a measure of the technology. It incorporates all the technological capacity of the production systems, such as education, scientific development and techniques that change the productivity rates. The technological level plays an important role in explaining how the asymmetry of information may affect market evolution in this model. We observe that, for high technological levels, the market can detect adverse selection. The model allows us to compute the maximum degree of information asymmetry before the market collapses; beyond this critical point the market evolves for a limited period of time and then dies out completely. When beta is close to 1 (symmetric information), the market becomes more profitable for high-quality goods, although high- and low-quality markets coexist. The maximum level of information asymmetry is a consequence of an ergodicity breakdown in the process of quality evaluation. (C) 2011 Elsevier B.V. All rights reserved.
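The role of the perceptive capacity beta can be illustrated with a toy simulation. The construction below is entirely ours, loosely inspired by the abstract and not the authors' model: a consumer observes quality through a channel whose fidelity is beta, and with beta = 1 (symmetric information) always identifies high-quality goods, while with beta = 0 purchase decisions carry no information about quality.

```python
import random

def simulate_market(beta, steps=10000, seed=0):
    """Toy adverse-selection illustration (our construction, not the paper's
    model): the consumer perceives quality through a beta-weighted mix of the
    true quality and noise, and buys 'high' when the perceived value is high.
    Returns the fraction of purchase decisions that match the true quality."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(steps):
        true_quality = rng.random()                       # quality on offer
        noise = rng.random()                              # perception noise
        perceived = beta * true_quality + (1 - beta) * noise
        buys_high = perceived >= 0.5
        correct += buys_high == (true_quality >= 0.5)
    return correct / steps
```

As beta decreases, the fraction of informed purchases degrades toward chance, which is the qualitative mechanism by which asymmetric information can make a quality-differentiated market unsustainable.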
Abstract:
This article discusses issues related to the organization and reception of information in the context of technology-driven public information services and systems. It stems from the assumption that in a "technologized" society, the distance between users and information is almost always of a cognitive and socio-cultural nature, a product of our effort to design communication. In this context, we favor the approach of the information sign, seeking to answer how a documentary message turns into information, i.e. a structure recognized as socially useful. Observing the structural, cognitive and communicative aspects of the documentary message, based on Documentary Linguistics, Terminology and Textual Linguistics, we analyze the knowledge management and innovation policy of the Government of the State of Sao Paulo, which authorizes the use of Web 2.0, and question to what extent this initiative represents innovation in the environment of libraries.
Abstract:
Taking as a starting point the acknowledgment that the principles and methods used to build and manage documentary systems are dispersed and lack systematization, this study hypothesizes that the notion of structure, by assuming mutual relationships among its elements, promotes more organic systems and assures better quality and consistency in the retrieval of information relevant to users' needs. Accordingly, it aims to explore the fundamentals of information records and documentary systems, starting from the notion of structure. To this end, it presents basic concepts and matters related to documentary systems and information records. It then lists the theoretical contributions on the notion of structure studied by Benveniste, Ferrater Mora, Levi-Strauss, Lopes, Penalver Simo and Saussure, as well as by Ducrot, Favero and Koch. Appropriations already made in Documentation by Paul Otlet, Garcia Gutierrez and Moreiro Gonzalez come as a further topic. It concludes that the notion of structure adopted to make explicit a hypothesis of real systematization yields more organic systems and provides a pedagogical reference for documentary tasks.
Abstract:
The aim of this paper is to highlight some of the methods of imagetic information representation, reviewing the literature of the area and proposing a methodological model adapted to Brazilian museums. A methodology of imagetic information representation is developed based on the Brazilian characteristics of information treatment, in order to adapt it to museums. Finally, spreadsheets illustrating this methodology are presented.
Abstract:
A study was designed to determine how well the degree programs in information and library science available in 2000-2005 at the public universities of Madrid fit the labour market needs of their students. The methodology consisted of a questionnaire addressed to graduates. Although the number of completed surveys is not high (118), the authors believe that the results obtained permit a series of conclusions that may be extrapolated to the entire cohort.
Abstract:
The information architecture project is one of the initial stages of a website project; thus, detecting and correcting errors at this stage is easier and less time-consuming than in the following stages. However, to minimize errors in information architecture projects, a methodology is necessary to organize the professional's work and guarantee the quality of the final product. The profile of professionals working with information architecture in Brazil was analyzed (quantitative research by means of an online questionnaire), as well as the difficulties, techniques and methodologies found in their projects (qualitative research by means of in-depth interviews supported by the Sense-Making approach). We conclude that methodologies for information architecture projects need to further incorporate User-Centered Design approaches and ways to evaluate their results.
Abstract:
This article deals with the activity of defining the information of hospital systems as fundamental for choosing the type of information system to be used and the organizational level to be supported. The use of hospital management information systems improves the user's decision-making process by allowing the generation of control reports and the follow-up of the procedures performed in the hospital.
Abstract:
This text aims to address the role of museums in the production of knowledge and how objects are transformed into documents when museums incorporate them. On accepting the effects of such transformation, museums start working not only with material goods but also with symbolic goods. The collection manager or exhibition curator communicates through documents rather than bringing to light their intrinsic content. In this sense, every process involving museum documents, from the selection of collections to exhibitions, has a rhetorical and ideological nature. Museums must search for meanings through the correlations established in the process of producing information. Exhibitions should present objects in multiple contexts, giving visitors the opportunity to participate and to attribute their own meanings to them.
Abstract:
Age-related changes in running kinematics have been reported in the literature using classical inferential statistics. However, this approach has been hampered by the increasing number of biomechanical gait variables reported and, consequently, by the lack of differences presented in these studies. Data mining techniques have been applied in recent biomedical studies to solve this problem using a more general approach. In the present work, we re-analyzed lower extremity running kinematic data of 17 young and 17 elderly male runners using the Support Vector Machine (SVM) classification approach. In total, 31 kinematic variables were extracted to train the classification algorithm and test the generalized performance. The results revealed different accuracy rates across the three kernel methods adopted in the classifier, with the linear kernel performing best. A subsequent forward feature selection algorithm demonstrated that, with only six features, the linear-kernel SVM achieved a 100% classification rate, showing that these features provide powerful combined information to distinguish the age groups. The results of the present work demonstrate the potential of this approach for improving knowledge about age-related differences in running gait biomechanics and encourage the use of the SVM in other clinical contexts. (C) 2010 Elsevier Ltd. All rights reserved.
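The pipeline of the abstract, a linear SVM combined with greedy forward feature selection, can be sketched as follows. This is our illustration on synthetic data: the SVM is trained with the Pegasos sub-gradient method (bias term omitted, as in the original Pegasos formulation) and scored by training accuracy, which is not the study's implementation or validation scheme.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Linear SVM (labels in {-1, +1}) via the Pegasos sub-gradient method."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, t = np.zeros(d), 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)          # decaying step size
            w *= 1.0 - eta * lam           # shrink (regularization step)
            if y[i] * (X[i] @ w) < 1:      # hinge-loss sub-gradient step
                w += eta * y[i] * X[i]
    return w

def accuracy(w, X, y):
    return float(np.mean(np.sign(X @ w) == y))

def forward_select(X, y, k):
    """Greedy forward feature selection scored by training accuracy."""
    chosen, remaining = [], list(range(X.shape[1]))
    for _ in range(k):
        scores = {j: accuracy(train_linear_svm(X[:, chosen + [j]], y),
                              X[:, chosen + [j]], y)
                  for j in remaining}
        best = max(remaining, key=scores.get)
        chosen.append(best)
        remaining.remove(best)
    return chosen
```

On data where one variable separates the groups and another is noise, the selector picks the informative variable first, mirroring how six of the 31 kinematic variables sufficed for perfect separation in the study.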
Abstract:
The power loss reduction in distribution systems (DSs) is a nonlinear and multiobjective problem. Service restoration in DSs is even harder computationally, since it additionally requires a solution in real time. Both DS problems are computationally complex; for large-scale networks, the usual problem formulation has thousands of constraint equations. The node-depth encoding (NDE) enables a modeling of DS problems that eliminates several constraint equations from the usual formulation, making the solution simpler. In addition, a multiobjective evolutionary algorithm (EA) based on subpopulation tables adequately models several objectives and constraints, enabling a better exploration of the search space. The combination of the multiobjective EA with the NDE (MEAN) is the proposed approach for solving DS problems in large-scale networks. Simulation results show that the MEAN is able to find adequate restoration plans for a real DS with 3860 buses and 632 switches in a running time of 0.68 s. Moreover, the MEAN exhibits a sublinear running time as a function of system size. Tests with networks ranging from 632 to 5166 switches indicate that the MEAN can find network configurations corresponding to a power loss reduction of 27.64% for very large networks, while requiring relatively little running time.
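The node-depth encoding mentioned above represents each tree of the network forest as a list of (node, depth) pairs in depth-first order, so that a subtree is always a contiguous slice and can be moved between trees without violating radiality. A minimal sketch of that idea (the function names and the simple list-based operator are ours, not the authors' exact operators):

```python
def subtree_span(nde, i):
    """In a node-depth encoding (list of (node, depth) pairs in DFS order),
    return the slice bounds [i, j) of the subtree rooted at position i."""
    d = nde[i][1]
    j = i + 1
    while j < len(nde) and nde[j][1] > d:   # descendants have strictly greater depth
        j += 1
    return i, j

def prune_and_graft(nde_src, i, nde_dst, k):
    """Move the subtree at position i of nde_src so it hangs under position k
    of nde_dst (a sketch of an NDE subtree-transfer operator)."""
    a, b = subtree_span(nde_src, i)
    sub = nde_src[a:b]
    shift = nde_dst[k][1] + 1 - sub[0][1]   # re-root the subtree one level below k
    sub = [(node, depth + shift) for node, depth in sub]
    new_src = nde_src[:a] + nde_src[b:]
    new_dst = nde_dst[:k + 1] + sub + nde_dst[k + 1:]
    return new_src, new_dst
```

Because both operations are slice manipulations, a configuration produced by the EA is radial by construction, which is how the encoding removes many of the constraint equations from the usual formulation.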