974 results for Boolean Functions, Equivalence Class
Abstract:
The generating functional method is employed to investigate the synchronous dynamics of Boolean networks, providing an exact result for the system dynamics via a set of macroscopic order parameters. The topology of the networks studied and their constituent Boolean functions represent the system's quenched disorder and are sampled from a given distribution. The framework accommodates a variety of topologies and Boolean function distributions and can be used to study both the noisy and noiseless regimes; it enables one to calculate correlation functions at different times that are inaccessible via commonly used approximations. It is also used to determine conditions for the annealed approximation to be valid, explore phases of the system under different levels of noise, and obtain results for models with strong memory effects, where existing approximations break down. Links between Boolean networks and general Boolean formulas are identified and results common to both system types are highlighted. © 2012 Taylor and Francis Group, LLC.
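As an illustration of the kind of system being analysed (a direct simulation, not the generating functional machinery itself), a minimal sketch of the synchronous dynamics of a random Boolean network with quenched disorder and thermal noise, tracking the overlap between two replicas as a macroscopic order parameter, might look as follows; the network size N, in-degree K, noise level and horizon are illustrative choices, not values from the paper.

    import random

    # Minimal sketch: synchronous dynamics of a Kauffman-type random Boolean network,
    # tracking the overlap between two replicas as a macroscopic order parameter.
    N, K, T, p_flip = 200, 3, 50, 0.01   # illustrative parameters
    rng = random.Random(0)

    # Quenched disorder: random wiring and random Boolean functions (truth tables).
    inputs = [[rng.randrange(N) for _ in range(K)] for _ in range(N)]
    tables = [[rng.randrange(2) for _ in range(2 ** K)] for _ in range(N)]

    def step(state):
        """One synchronous update; each output is flipped with probability p_flip (thermal noise)."""
        new = []
        for i in range(N):
            idx = sum(state[j] << b for b, j in enumerate(inputs[i]))
            out = tables[i][idx]
            if rng.random() < p_flip:
                out ^= 1
            new.append(out)
        return new

    # Two replicas sharing the same quenched disorder but differing in a few initial bits.
    s1 = [rng.randrange(2) for _ in range(N)]
    s2 = list(s1)
    for i in rng.sample(range(N), 5):
        s2[i] ^= 1

    for t in range(T):
        overlap = sum(a == b for a, b in zip(s1, s2)) / N   # order parameter at time t
        print(f"t={t:2d}  overlap={overlap:.3f}")
        s1, s2 = step(s1), step(s2)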
Abstract:
Computing circuits composed of noisy logical gates, and their ability to represent arbitrary Boolean functions with a given level of error, are investigated within a statistical mechanics setting. Existing bounds on their performance are straightforwardly retrieved, generalized, and identified as the corresponding typical-case phase transitions. Results on error rates, function depth and sensitivity, and their dependence on the gate type and noise model used, are also obtained.
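A toy Monte Carlo experiment (not the statistical-mechanics calculation of the paper) can illustrate the basic setting: a balanced formula of ε-noisy NAND gates, each of which flips its output with probability ε, is evaluated against its noiseless counterpart to estimate the output error rate as a function of depth. The noise level, depth range and trial count below are illustrative assumptions.

    import random

    rng = random.Random(1)
    EPS = 0.05        # gate noise level (illustrative)
    TRIALS = 2000     # Monte Carlo samples per depth

    def noisy_nand_tree(depth, leaves, eps):
        """Evaluate a balanced NAND tree on `leaves`; each gate output flips with probability eps."""
        if depth == 0:
            return leaves[0]
        half = len(leaves) // 2
        a = noisy_nand_tree(depth - 1, leaves[:half], eps)
        b = noisy_nand_tree(depth - 1, leaves[half:], eps)
        out = 1 - (a & b)
        if eps > 0 and rng.random() < eps:
            out ^= 1
        return out

    for depth in range(1, 7):
        errors = 0
        for _ in range(TRIALS):
            leaves = [rng.randrange(2) for _ in range(2 ** depth)]
            clean = noisy_nand_tree(depth, leaves, 0.0)    # noiseless reference
            noisy = noisy_nand_tree(depth, leaves, EPS)    # eps-noisy evaluation
            errors += clean != noisy
        print(f"depth={depth}  estimated error rate={errors / TRIALS:.3f}")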
Abstract:
The dynamics of Boolean networks (BN) with quenched disorder and thermal noise is studied via the generating functional method. A general formulation, suitable for BN with any distribution of Boolean functions, is developed. It provides exact solutions and insight into the evolution of order parameters and properties of the stationary states, which are inaccessible via existing methodology. We identify cases where the commonly used annealed approximation is valid and others where it breaks down. Broader links between BN and general Boolean formulas are highlighted.
Abstract:
The research is partly supported by the INTAS project 04-77-7173, http://www.intas.be.
Abstract:
In physics, one attempts to infer the rules governing a system given only the results of imperfect measurements. Hence, microscopic theories may be effectively indistinguishable experimentally. We develop an operationally motivated procedure to identify the corresponding equivalence classes of states, and argue that the renormalization group (RG) arises from the inherent ambiguities associated with the classes: one encounters flow parameters as, e.g., a regulator, a scale, or a measure of precision, which specify representatives in a given equivalence class. This provides a unifying framework and reveals the role played by information in renormalization. We validate this idea by showing that it justifies the use of low-momenta n-point functions as statistically relevant observables around a Gaussian hypothesis. These results enable the calculation of distinguishability in quantum field theory. Our methods also provide a way to extend renormalization techniques to effective models which are not based on the usual quantum-field formalism, and elucidate the relationships between various types of RG.
Abstract:
We study the relations of shift equivalence and strong shift equivalence for matrices over a ring $\mathcal{R}$, and establish a connection between these relations and algebraic K-theory. We utilize this connection to obtain results in two areas where the shift and strong shift equivalence relations play an important role: the study of finite group extensions of shifts of finite type, and the Generalized Spectral Conjectures of Boyle and Handelman for nonnegative matrices over subrings of the real numbers. We show that the refinement of the shift equivalence class of a matrix $A$ over a ring $\mathcal{R}$ by strong shift equivalence classes over the ring is classified by a quotient $NK_{1}(\mathcal{R}) / E(A,\mathcal{R})$ of the algebraic K-group $NK_{1}(\mathcal{R})$. We use the K-theory of non-commutative localizations to show that in certain cases the subgroup $E(A,\mathcal{R})$ must vanish, including the case where $A$ is invertible over $\mathcal{R}$. We use the K-theory connection to clarify the structure of algebraic invariants for finite group extensions of shifts of finite type. In particular, we give a strong negative answer to a question of Parry, who asked whether the dynamical zeta function determines, up to finitely many topological conjugacy classes, the extensions by $G$ of a fixed mixing shift of finite type. We apply the K-theory connection to prove the equivalence of a strong and a weak form of the Generalized Spectral Conjecture of Boyle and Handelman for primitive matrices over subrings of $\mathbb{R}$. We construct explicit matrices whose class in the algebraic K-group $NK_{1}(\mathcal{R})$ is non-zero for certain rings $\mathcal{R}$ motivated by applications. We study the possible dynamics of the restriction of a homeomorphism of a compact manifold to an isolated zero-dimensional set. We prove that for $n \ge 3$ every compact zero-dimensional system can arise as an isolated invariant set for a homeomorphism of a compact $n$-manifold. In dimension two, we provide obstructions and examples.
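For readers unfamiliar with the two relations, the standard definitions (going back to Williams, and not restated in the abstract) for square matrices $A$, $B$ over the ring $\mathcal{R}$ are, in brief:

    % Elementary strong shift equivalence (lag 1): there exist matrices R, S over \mathcal{R} with
    A = RS, \qquad B = SR.
    % Strong shift equivalence is the transitive closure of this elementary relation,
    % i.e. a finite chain of such elementary equivalences.
    % Shift equivalence: there exist matrices R, S over \mathcal{R} and a lag \ell \ge 1 with
    AR = RB, \qquad SA = BS, \qquad A^{\ell} = RS, \qquad B^{\ell} = SR.

Shift equivalence is the coarser of the two relations; the abstract's quotient $NK_{1}(\mathcal{R}) / E(A,\mathcal{R})$ measures how a single shift equivalence class splits into strong shift equivalence classes.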
Abstract:
Given a bent function $f(x)$ of $n$ variables, its max-weight and min-weight functions are introduced as the Boolean functions $f^{+}(x)$ and $f^{-}(x)$ whose supports are the sets $\{a \in \mathbb{F}_2^n \mid w(f \oplus l_a) = 2^{n-1} + 2^{n/2-1}\}$ and $\{a \in \mathbb{F}_2^n \mid w(f \oplus l_a) = 2^{n-1} - 2^{n/2-1}\}$ respectively, where $w(f \oplus l_a)$ denotes the Hamming weight of the Boolean function $f(x) \oplus l_a(x)$ and $l_a(x)$ is the linear function defined by $a \in \mathbb{F}_2^n$. $f^{+}(x)$ and $f^{-}(x)$ are proved to be bent functions. Furthermore, by combining the 4 minterms of 2 variables with the max-weight or min-weight functions of a 4-tuple $(f_0(x), f_1(x), f_2(x), f_3(x))$ of bent functions of $n$ variables such that $f_0(x) \oplus f_1(x) \oplus f_2(x) \oplus f_3(x) = 1$, a bent function of $n+2$ variables is obtained. A family of 4-tuples of bent functions satisfying the above condition is introduced and, finally, the number of bent functions that can be constructed using the method introduced in this paper is obtained. Our construction is also compared with other constructions of bent functions.
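A brute-force check of these definitions on a small example makes them concrete. The sketch below uses the classic bent function f(x) = x1 x2 ⊕ x3 x4 of n = 4 variables (an illustrative choice, not taken from the paper) and recovers the supports of f+ and f− directly from the Hamming weights w(f ⊕ l_a).

    from itertools import product

    n = 4  # number of variables (must be even for bent functions)

    def f(x):
        # A classic bent function of 4 variables: f(x) = x1*x2 XOR x3*x4
        return (x[0] & x[1]) ^ (x[2] & x[3])

    def l(a, x):
        # Linear function l_a(x) = a . x (inner product over F_2)
        return sum(ai & xi for ai, xi in zip(a, x)) % 2

    points = list(product((0, 1), repeat=n))

    # Hamming weight of f XOR l_a for every a in F_2^n
    weights = {a: sum(f(x) ^ l(a, x) for x in points) for a in points}

    w_max = 2 ** (n - 1) + 2 ** (n // 2 - 1)   # 10 for n = 4
    w_min = 2 ** (n - 1) - 2 ** (n // 2 - 1)   # 6  for n = 4

    support_f_plus  = [a for a, w in weights.items() if w == w_max]
    support_f_minus = [a for a, w in weights.items() if w == w_min]

    print("supp(f+):", support_f_plus)
    print("supp(f-):", support_f_minus)
    # For a bent function every weight equals w_max or w_min, so the two supports partition F_2^n.
    assert len(support_f_plus) + len(support_f_minus) == 2 ** n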
Abstract:
Efficient hill climbers have recently been proposed for single- and multi-objective pseudo-Boolean optimization problems. For $k$-bounded pseudo-Boolean functions where each variable appears in at most a constant number of subfunctions, it has been theoretically proven that the neighborhood of a solution can be explored in constant time. These hill climbers, combined with a high-level exploration strategy, have been shown to improve state-of-the-art methods in experimental studies and open the door to so-called Gray Box Optimization, where part, but not all, of the details of the objective functions are used to better explore the search space. One important limitation of all the previous proposals is that they can only be applied to unconstrained pseudo-Boolean optimization problems. In this work, we address the constrained case for multi-objective $k$-bounded pseudo-Boolean optimization problems. We find that adding constraints to the pseudo-Boolean problem has a linear computational cost in the hill climber.
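The efficient neighborhood exploration rests on the fact that flipping a single bit only affects the subfunctions in which that variable appears. A minimal sketch of that bookkeeping for a single-objective, unconstrained k-bounded function is given below; the instance and its subfunctions are hypothetical, and the constrained multi-objective machinery of the paper is not reproduced.

    import random

    # Hypothetical k-bounded pseudo-Boolean instance: each subfunction touches at most k=3 variables.
    # f(x) = sum over j of f_j(x restricted to vars_j)
    n = 8
    subfunctions = [
        (lambda a, b, c: a ^ (b & c), (0, 1, 2)),
        (lambda a, b, c: a + b + c,   (2, 3, 4)),
        (lambda a, b, c: a * b - c,   (4, 5, 6)),
        (lambda a, b, c: (a | b) ^ c, (5, 6, 7)),
    ]

    # Variable -> indices of the subfunctions it appears in (the "gray box" structure).
    touched = {i: [j for j, (_, vs) in enumerate(subfunctions) if i in vs] for i in range(n)}

    def evaluate(x):
        return sum(fn(*(x[v] for v in vs)) for fn, vs in subfunctions)

    def delta_flip(x, i):
        """Score change of flipping bit i: only re-evaluates the subfunctions containing i."""
        before = sum(subfunctions[j][0](*(x[v] for v in subfunctions[j][1])) for j in touched[i])
        x[i] ^= 1
        after = sum(subfunctions[j][0](*(x[v] for v in subfunctions[j][1])) for j in touched[i])
        x[i] ^= 1  # restore
        return after - before

    rng = random.Random(0)
    x = [rng.randrange(2) for _ in range(n)]
    full = evaluate(x)
    for i in range(n):
        d = delta_flip(x, i)
        y = x[:]; y[i] ^= 1
        assert full + d == evaluate(y)   # incremental delta matches full re-evaluation
        print(f"flip bit {i}: delta = {d}")

Because each variable appears in a bounded number of subfunctions, each delta costs O(1) work regardless of n, which is the property the hill climbers exploit.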
Abstract:
There is an urgent need to make drug discovery cheaper and faster. This will enable the development of treatments for diseases currently neglected for economic reasons, such as tropical and orphan diseases, and generally increase the supply of new drugs. Here, we report the Robot Scientist 'Eve', designed to make drug discovery more economical. A Robot Scientist is a laboratory automation system that uses artificial intelligence (AI) techniques to discover scientific knowledge through cycles of experimentation. Eve integrates and automates library screening, hit confirmation, and lead generation through cycles of quantitative structure-activity relationship learning and testing. Using econometric modelling, we demonstrate that the use of AI to select compounds economically outperforms standard drug screening. For further efficiency, Eve uses a standardized form of assay to compute Boolean functions of compound properties. These assays can be quickly and cheaply engineered using synthetic biology, enabling more targets to be assayed for a given budget. Eve has repositioned several drugs against specific targets in parasites that cause tropical diseases. One validated discovery is that the anti-cancer compound TNP-470 is a potent inhibitor of dihydrofolate reductase from the malaria-causing parasite Plasmodium vivax.
Abstract:
The expansion of the three-term contingency into units with four or more elements has opened new perspectives for understanding complex behaviors, such as the emergence of responses that derive from the formation of equivalence classes of stimuli and that model symbolic and conceptual behaviors. In experimental investigation, the matching-to-sample procedure has frequently been employed to establish conditional discriminations. In particular, obtaining generalized identity matching is considered to demonstrate the acquisition of the concepts of sameness and difference. We argue that the attempt to understand these concepts in terms of conditional discriminative processes may have been responsible for the frequent failures to demonstrate them in non-human subjects. The lack of correspondence between the discriminative processes responsible for establishing the reflexivity relation among stimuli that form equivalence classes and generalized identity matching is reviewed here across empirical studies and discussed with respect to its implications.
Abstract:
Background: The inference of gene regulatory networks (GRNs) from large-scale expression profiles is currently one of the most challenging problems of Systems Biology. Many techniques and models have been proposed for this task. However, it is generally not possible to recover the original topology with great accuracy, mainly due to the short time series data in the face of the high complexity of the networks and the intrinsic noise of the expression measurements. In order to improve the accuracy of GRN inference methods based on entropy (mutual information), a new criterion function is proposed here. Results: In this paper we introduce the use of the generalized entropy proposed by Tsallis for the inference of GRNs from time series expression profiles. The inference process is based on a feature selection approach and the conditional entropy is applied as the criterion function. In order to assess the proposed methodology, the algorithm is applied to recover the network topology from temporal expressions generated by an artificial gene network (AGN) model as well as from the DREAM challenge. The adopted AGN is based on theoretical models of complex networks and its gene transfer function is obtained by drawing at random from the set of possible Boolean functions, thus creating its dynamics. The DREAM time series data, on the other hand, present networks of varying size whose topologies are based on real networks; their dynamics are generated by continuous differential equations with noise and perturbation. By adopting both data sources, it is possible to estimate the average quality of the inference with respect to different network topologies, transfer functions and network sizes. Conclusions: A remarkable improvement in accuracy was observed in the experimental results: the non-Shannon entropy reduced the number of false connections in the inferred topology. The best value of the Tsallis free parameter was on average in the range 2.5 ≤ q ≤ 3.5 (hence, subextensive entropy), which opens new perspectives for GRN inference methods based on information theory and for the investigation of the nonextensivity of such networks. The inference algorithm and criterion function proposed here were implemented and included in the DimReduction software, which is freely available at http://sourceforge.net/projects/dimreduction and http://code.google.com/p/dimreduction/.
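The Tsallis generalization replaces the Shannon form with S_q = (1 − Σ_i p_i^q)/(q − 1), recovering the Shannon entropy in the limit q → 1. A minimal sketch of using it as a conditional-entropy criterion for scoring candidate predictor sets is given below; the toy binarized data are illustrative, not the DimReduction implementation.

    import math
    from collections import Counter, defaultdict

    def tsallis_entropy(counts, q=2.5):
        """Tsallis entropy S_q = (1 - sum_i p_i^q) / (q - 1) of an empirical distribution."""
        total = sum(counts)
        if total == 0:
            return 0.0
        probs = [c / total for c in counts]
        if abs(q - 1.0) < 1e-9:                      # Shannon limit
            return -sum(p * math.log(p) for p in probs if p > 0)
        return (1.0 - sum(p ** q for p in probs)) / (q - 1.0)

    def conditional_tsallis(samples, predictor_idx, target_idx, q=2.5):
        """Mean conditional Tsallis entropy of the target gene given a tuple of predictor genes."""
        groups = defaultdict(Counter)
        for row in samples:
            key = tuple(row[i] for i in predictor_idx)
            groups[key][row[target_idx]] += 1
        n = len(samples)
        return sum((sum(c.values()) / n) * tsallis_entropy(list(c.values()), q)
                   for c in groups.values())

    # Toy binarized expression samples: columns are genes, rows are time points (illustrative).
    samples = [
        (0, 0, 1, 0), (0, 1, 1, 1), (1, 0, 0, 1), (1, 1, 0, 0),
        (0, 0, 1, 0), (1, 1, 0, 0), (0, 1, 1, 1), (1, 0, 0, 1),
    ]
    # Score candidate predictor sets for target gene 3: lower conditional entropy = better predictor.
    for cand in [(0,), (1,), (2,), (0, 1)]:
        print(cand, round(conditional_tsallis(samples, cand, 3, q=2.5), 4))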
Abstract:
In this work, we present a systematic approach to the representation of modelling assumptions. Modelling assumptions form the fundamental basis for the mathematical description of a process system. These assumptions can be translated into either additional mathematical relationships or constraints between model variables, equations, balance volumes or parameters. In order to analyse the effect of modelling assumptions in a formal, rigorous way, a syntax of modelling assumptions has been defined. The smallest indivisible syntactical element, the so-called assumption atom, has been identified as a triplet. With this syntax, a modelling assumption can be described as an elementary assumption, i.e. an assumption consisting of only an assumption atom, or as a composite assumption consisting of a conjunction of elementary assumptions. This syntax of modelling assumptions enables us to represent modelling assumptions as transformations acting on the set of model equations. The notions of syntactical correctness and semantical consistency of sets of modelling assumptions are defined and necessary conditions for checking them are given. These transformations can be used in several ways and their implications can be analysed by formal methods. The modelling assumptions define model hierarchies, that is, a series of model families, each belonging to a particular equivalence class. These model equivalence classes can be related to primal assumptions regarding the definition of mass, energy and momentum balance volumes, and to secondary and tertiary assumptions regarding the presence or absence and the form of mechanisms within the system. Within equivalence classes there are many model members, related to algebraic model transformations of the particular model. We show how these model hierarchies are driven by the underlying assumption structure and indicate some implications for system dynamics and complexity issues. (C) 2001 Elsevier Science Ltd. All rights reserved.
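The abstract identifies the assumption atom as a triplet and a composite assumption as a conjunction of atoms. A hypothetical encoding of that syntax is sketched below; the field names and the example atoms are our own illustrative assumptions, not the paper's notation.

    from dataclasses import dataclass
    from typing import Tuple

    @dataclass(frozen=True)
    class AssumptionAtom:
        """Smallest indivisible syntactical element; the three field names are hypothetical."""
        subject: str    # e.g. a model variable, parameter or balance volume
        relation: str   # e.g. "is-negligible", "is-constant", "equals"
        value: str      # e.g. "0", "steady-state", another model quantity

    # A composite assumption is a conjunction of elementary assumptions (atoms).
    CompositeAssumption = Tuple[AssumptionAtom, ...]

    isothermal: CompositeAssumption = (
        AssumptionAtom("dT/dt", "equals", "0"),
        AssumptionAtom("heat-loss-term", "is-negligible", "-"),
    )
    # Applying such a composite assumption would act as a transformation on the model equations,
    # e.g. removing the energy balance and treating T as a constant.
    print(isothermal)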
Abstract:
In this paper, motivated by the interest and relevance of the study of tumor growth models, the central point of our investigation is the study of the chaotic dynamics and the bifurcation structure of Weibull-Gompertz-Fréchet functions: a class of continuously defined one-dimensional maps. Using symbolic dynamics techniques and iteration theory, we established that, depending on the properties of this class of functions in a neighborhood of a bifurcation point $P_{BB}$ in a two-dimensional parameter space, there exists an order in which the infinite number of periodic orbits are born: the Sharkovsky ordering. Consequently, the corresponding symbolic sequences follow the usual unimodal kneading sequences in the topologically ordered tree. We verified that, under some sufficient conditions, Weibull-Gompertz-Fréchet functions have a particular bifurcation structure: a big bang bifurcation point $P_{BB}$. This fractal bifurcation structure is of the so-called "box-within-a-box" type, associated with a box $\omega_1$ from which an infinite number of bifurcation curves issue. This analysis is carried out using fold and flip bifurcation curves and symbolic dynamics techniques. The present paper is an original contribution to the framework of big bang bifurcation analysis for continuous maps.
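The symbolic-dynamics step can be illustrated independently of the specific Weibull-Gompertz-Fréchet parameterization, which the abstract does not give: for a unimodal map one iterates the critical point and records on which side of it each iterate falls, yielding the kneading sequence. The sketch below uses the logistic map purely as a stand-in unimodal map, with illustrative parameter values.

    def kneading_sequence(f, critical_point, length=20):
        """Symbolic itinerary (kneading sequence) of the critical orbit of a unimodal map f."""
        symbols = []
        x = f(critical_point)                 # start from the image of the critical point
        for _ in range(length):
            if abs(x - critical_point) < 1e-12:
                symbols.append("C")
            elif x < critical_point:
                symbols.append("L")
            else:
                symbols.append("R")
            x = f(x)
        return "".join(symbols)

    # Logistic map as a stand-in unimodal map; the parameter values are illustrative.
    for r in (3.2, 3.5, 3.83, 4.0):
        logistic = lambda x, r=r: r * x * (1.0 - x)
        print(f"r={r}: {kneading_sequence(logistic, 0.5)}")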
Abstract:
Sitting between your past and your future doesn't mean you are in the present. (Dakota Skye) Complex systems science is an interdisciplinary field grouping under the same umbrella dynamical phenomena from the social, natural or mathematical sciences. The emergence of a higher-order organization or behavior, transcending that expected of the linear addition of the parts, is a key factor shared by all these systems. Most complex systems can be modeled as networks that represent the interactions among the system's components. In addition to the actual nature of the parts' interactions, the intrinsic topological structure of the underlying network is believed to play a crucial role in the remarkable emergent behaviors exhibited by the systems. Moreover, the topology is also a key factor in explaining the extraordinary flexibility and resilience to perturbations observed in transmission and diffusion phenomena. In this work, we study the effect of different network structures on the performance and on the fault tolerance of systems in two different contexts. In the first part, we study cellular automata, which are a simple paradigm for distributed computation. Cellular automata are made of basic Boolean computational units, the cells, relying on simple rules and information from the surrounding cells to perform a global task. The limited visibility of the cells can be modeled as a network, where interactions amongst cells are governed by an underlying structure, usually a regular one. In order to increase the performance of cellular automata, we chose to change their topology. We applied computational principles inspired by Darwinian evolution, called evolutionary algorithms, to alter the system's topological structure, starting from either a regular or a random one. The outcome is remarkable, as the resulting topologies share properties of both regular and random networks, and display similarities to the Watts-Strogatz small-world networks found in social systems. Moreover, the performance and tolerance to probabilistic faults of our small-world-like cellular automata surpass those of regular ones. In the second part, we use the context of biological genetic regulatory networks and, in particular, Kauffman's random Boolean networks model. In some ways, this model is close to cellular automata, although it is not expected to perform any task. Instead, it simulates the time evolution of genetic regulation within living organisms under strict conditions. The original model, though very attractive in its simplicity, suffered from important shortcomings unveiled by recent advances in genetics and biology. We propose to use these new discoveries to improve the original model. Firstly, we have used artificial topologies believed to be closer to those of gene regulatory networks. We have also studied actual biological organisms and used parts of their genetic regulatory networks in our models. Secondly, we have addressed the improbable full synchronicity of the events taking place in Boolean networks and proposed a more biologically plausible cascading update scheme. Finally, we tackled the actual Boolean functions of the model, i.e. the specifics of how genes activate according to the activity of upstream genes, and presented a new update function that takes into account the actual promoting and repressing effects of one gene on another. Our improved models demonstrate the expected, biologically sound behavior of the previous GRN model, yet with superior resistance to perturbations. We believe they are one step closer to the biological reality.
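A minimal example of the first ingredient, a cellular automaton whose cells see only a network-defined neighborhood and apply a simple Boolean rule towards a global task, is sketched below. The ring topology, the local majority rule and the density-classification reading of the output are illustrative stand-ins, not the evolved topologies of the thesis.

    import random

    # Cells on a ring; each cell sees itself and r neighbours on each side (its "visibility" network).
    N, r, STEPS = 49, 2, 30
    rng = random.Random(3)
    neighbourhood = [[(i + d) % N for d in range(-r, r + 1)] for i in range(N)]

    def step(state):
        # Local majority rule: a simple Boolean rule often used for the density-classification task.
        return [1 if sum(state[j] for j in neighbourhood[i]) * 2 > len(neighbourhood[i]) else 0
                for i in range(N)]

    state = [rng.randrange(2) for _ in range(N)]
    initial_density = sum(state) / N
    for _ in range(STEPS):
        state = step(state)
    print(f"initial density = {initial_density:.2f}, final density = {sum(state) / N:.2f}")
    # A good topology/rule pair would drive every cell to 1 when the initial density exceeds 1/2
    # and to 0 otherwise; plain local majority on a regular ring typically gets stuck in stripes,
    # which is precisely the kind of limitation that motivates evolving the underlying topology.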
Abstract:
By identifying types whose low-order beliefs up to level $l_i$ about the state of nature coincide, we obtain quotient type spaces that are typically smaller than the original ones, preserve basic topological properties, and allow standard equilibrium analysis even under bounded reasoning. Our Bayesian Nash $(l_i, l_{-i})$-equilibria capture players' inability to distinguish types belonging to the same equivalence class. The case with uncertainty about the vector of levels $(l_i, l_{-i})$ is also analyzed. Two examples illustrate the constructions.