12 results for Minimization algorithm
in Boston University Digital Commons
Abstract:
Similarly to protein folding, the association of two proteins is driven by a free energy funnel, determined by favorable interactions in some neighborhood of the native state. We describe a docking method based on stochastic global minimization of funnel-shaped energy functions in the space of rigid body motions (SE(3)) while accounting for flexibility of the interface side chains. The method, called semi-definite programming-based underestimation (SDU), employs a general quadratic function to underestimate a set of local energy minima and uses the resulting underestimator to bias further sampling. While SDU effectively minimizes functions with funnel-shaped basins, its application to docking in the rotational and translational space SE(3) is not straightforward due to the geometry of that space. We introduce a strategy that uses separate independent variables for side-chain optimization, center-to-center distance of the two proteins, and five angular descriptors of the relative orientations of the molecules. The removal of the center-to-center distance turns out to vastly improve the efficiency of the search, because the five-dimensional space now exhibits a well-behaved energy surface suitable for underestimation. This algorithm explores the free energy surface spanned by encounter complexes that correspond to local free energy minima and shows similarity to the model of macromolecular association that proceeds through a series of collisions. Results for standard protein docking benchmarks establish that in this space the free energy landscape is a funnel in a reasonably broad neighborhood of the native state and that the SDU strategy can generate docking predictions with less than 5 Å ligand interface Cα root-mean-square deviation while achieving an approximately 20-fold efficiency gain compared to Monte Carlo methods.
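SDU's key step, fitting a convex quadratic beneath a set of sampled local minima, is compact enough to sketch. The version below is a simplification rather than the authors' code: it restricts the quadratic to a separable (diagonal) form, so the semidefinite constraint collapses to nonnegative curvature coefficients and the fit becomes a plain linear program; function names are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def fit_underestimator(X, f):
    """Fit q(x) = sum_j a_j*x_j**2 + b_j*x_j + c with a_j >= 0 and
    q(x_i) <= f_i at every sampled local minimum (rows of X, values f),
    maximizing sum_i q(x_i) so q hugs the minima from below."""
    n, d = X.shape
    X2 = X ** 2
    # Decision vector: [a_1..a_d, b_1..b_d, c]; linprog minimizes, so negate.
    obj = -np.concatenate([X2.sum(axis=0), X.sum(axis=0), [n]])
    A_ub = np.hstack([X2, X, np.ones((n, 1))])   # q(x_i) <= f_i, row by row
    bounds = [(0, None)] * d + [(None, None)] * (d + 1)
    res = linprog(obj, A_ub=A_ub, b_ub=f, bounds=bounds)
    assert res.success
    return res.x[:d], res.x[d:2 * d], res.x[-1]  # a, b, c

def underestimator_minimizer(a, b):
    # Coordinatewise argmin of the separable quadratic; flat coordinates -> 0.
    return np.where(a > 1e-12, -b / (2 * np.maximum(a, 1e-12)), 0.0)
```

The minimizer of the fitted underestimator is then used to bias where the next round of local minimizations starts, which is the "bias further sampling" step described in the abstract.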
Abstract:
We give a hybrid algorithm for parsing ϵ-grammars based on Tomita's non-ϵ-grammar parsing algorithm ([Tom86]) and Nozohoor-Farshi's ϵ-grammar recognition algorithm ([NF91]). The hybrid parser handles the same set of grammars handled by Nozohoor-Farshi's recognizer. The algorithm's details and an example of its use are given. We also discuss the deployment of the hybrid algorithm within a GB parser, and the reason an ϵ-grammar parser is needed in our GB parser.
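The core complication ϵ-productions introduce for LR-style parsers like Tomita's is that some nonterminals can derive the empty string, so the parser must know the nullable set. A minimal fixpoint computation of that set is sketched below; the grammar encoding is hypothetical and this is background, not the hybrid algorithm itself.

```python
def nullable_nonterminals(grammar):
    """grammar: dict mapping a nonterminal to a list of right-hand sides,
    each RHS a tuple of symbols; () encodes an epsilon production.
    Returns the set of nonterminals that can derive the empty string."""
    nullable = set()
    changed = True
    while changed:  # iterate to a fixpoint
        changed = False
        for lhs, rhss in grammar.items():
            if lhs in nullable:
                continue
            if any(all(s in nullable for s in rhs) for rhs in rhss):
                nullable.add(lhs)
                changed = True
    return nullable

# Example: S -> A B, A -> 'a' | epsilon, B -> epsilon
g = {'S': [('A', 'B')], 'A': [('a',), ()], 'B': [()]}
assert nullable_nonterminals(g) == {'A', 'B', 'S'}
```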
Abstract:
We study the problem of type inference for a family of polymorphic type disciplines containing the power of Core-ML. This family comprises all levels of the stratification of the second-order lambda-calculus by "rank" of types. We show that typability is an undecidable problem at every rank k ≥ 3 of this stratification. While it was already known that typability is decidable at rank ≤ 2, no direct and easy-to-implement algorithm was available. To design such an algorithm, we develop a new notion of reduction and show how to use it to reduce the problem of typability at rank 2 to the problem of acyclic semi-unification. A by-product of our analysis is a simple solution procedure for acyclic semi-unification.
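For orientation on the stratification: the rank of a type counts how deeply quantifiers sit to the left of arrows. A minimal sketch of the standard rank computation, assuming a small hypothetical tuple-based type AST:

```python
from collections import namedtuple

TVar = namedtuple('TVar', 'name')          # type variable
Arrow = namedtuple('Arrow', 'left right')  # function type
Forall = namedtuple('Forall', 'var body')  # universal quantifier

def rank(t):
    """Quantifier-free types have rank 0; a quantifier forces rank >= 1;
    a quantified type to the left of an arrow has its rank bumped by one."""
    if isinstance(t, TVar):
        return 0
    if isinstance(t, Forall):
        return max(1, rank(t.body))
    lr = rank(t.left)
    return max(lr + 1 if lr > 0 else 0, rank(t.right))

# (forall a. a -> a) -> b has rank 2, the frontier where typability
# is still decidable; rank 3 and above is undecidable per the abstract.
poly_id = Forall('a', Arrow(TVar('a'), TVar('a')))
assert rank(Arrow(poly_id, TVar('b'))) == 2
```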
Abstract:
This paper presents a lower-bound result on the computational power of a genetic algorithm in the context of combinatorial optimization. We describe a new genetic algorithm, the merged genetic algorithm, and prove that for the class of monotonic functions, the algorithm finds the optimal solution, and does so with an exponential convergence rate. The analysis pertains to the ideal behavior of the algorithm where the main task reduces to showing convergence of probability distributions over the search space of combinatorial structures to the optimal one. We take exponential convergence to be indicative of efficient solvability for the sample-bounded algorithm, although a sampling theory is needed to better relate the limit behavior to actual behavior. The paper concludes with a discussion of some immediate problems that lie ahead.
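The merged genetic algorithm and its convergence proof are the paper's contribution; for orientation only, here is a generic bit-string GA skeleton of the kind being analyzed, with fitness-proportional selection and one-point crossover. All parameter values are illustrative.

```python
import random

def genetic_search(fitness, n_bits, pop_size=50, generations=200,
                   p_mut=0.01, seed=0):
    """Generic GA skeleton over bit strings. The paper's merged GA
    replaces plain crossover with a merge operator tailored to monotonic
    fitness functions; that operator is not reproduced here."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        weights = [fitness(ind) for ind in pop]         # must be positive
        parents = rng.choices(pop, weights=weights, k=pop_size)
        nxt = []
        for i in range(0, pop_size, 2):
            a, b = parents[i], parents[i + 1]
            cut = rng.randrange(1, n_bits)              # one-point crossover
            for child in (a[:cut] + b[cut:], b[:cut] + a[cut:]):
                nxt.append([bit ^ (rng.random() < p_mut) for bit in child])
        pop = nxt
    return max(pop, key=fitness)

# Monotonic example: one-max (fitness grows with each bit set)
best = genetic_search(lambda x: 1 + sum(x), n_bits=20)
```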
Abstract:
This report presents an algorithm, and its implementation, for doing type inference in the context of Quasi-Static Typing (QST) ["Quasi-Static Typing." Satish Thatte, Proc. ACM Symp. on Principles of Programming Languages, 1988]. The package infers types à la QST for the simply typed λ-calculus.
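QST's distinctive rules insert coercions between dynamic and static types, which is beyond a short sketch; below is only the standard unification-based inference for the simply typed λ-calculus that such a package builds on. Term and type encodings are hypothetical, and the occurs check is omitted for brevity.

```python
import itertools

fresh = itertools.count()

def new_var():
    return ('var', next(fresh))

def walk(t, subst):
    # Follow variable bindings to the representative type.
    while t[0] == 'var' and t in subst:
        t = subst[t]
    return t

def unify(t1, t2, subst):
    """Types are ('var', n) or ('fun', dom, cod). No occurs check:
    a production checker needs one to reject circular types."""
    t1, t2 = walk(t1, subst), walk(t2, subst)
    if t1 == t2:
        return subst
    if t1[0] == 'var':
        return {**subst, t1: t2}
    if t2[0] == 'var':
        return {**subst, t2: t1}
    subst = unify(t1[1], t2[1], subst)
    return unify(t1[2], t2[2], subst)

def infer(term, env, subst):
    """Terms: ('v', name) | ('lam', name, body) | ('app', f, a)."""
    if term[0] == 'v':
        return env[term[1]], subst
    if term[0] == 'lam':
        a = new_var()
        body_t, subst = infer(term[2], {**env, term[1]: a}, subst)
        return ('fun', a, body_t), subst
    f_t, subst = infer(term[1], env, subst)
    a_t, subst = infer(term[2], env, subst)
    r = new_var()
    return r, unify(f_t, ('fun', a_t, r), subst)

# \x. x infers to a -> a
t, s = infer(('lam', 'x', ('v', 'x')), {}, {})
```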
Abstract:
We present an online distributed algorithm, the Causation Logging Algorithm (CLA), in which Autonomous Systems (ASes) in the Internet individually report route oscillations/flaps they experience to a central Internet Routing Registry (IRR). The IRR aggregates these reports and may observe what we call causation chains where each node on the chain caused a route flap at the next node along the chain. A chain may also have a causation cycle. The type of an observed causation chain/cycle allows the IRR to infer the underlying policy routing configuration (i.e., the system of economic relationships and constraints on route/path preferences). Our algorithm is based on a formal policy routing model that captures the propagation dynamics of route flaps under arbitrary changes in topology or path preferences. We derive invariant properties of causation chains/cycles for ASes which conform to economic relationships based on the popular Gao-Rexford model. The Gao-Rexford model is known to be safe in the sense that the system always converges to a stable set of paths under static conditions. Our CLA algorithm recovers the type/property of an observed causation chain of an underlying system and determines whether it conforms to the safe economic Gao-Rexford model. Causes for nonconformity can be diagnosed by comparing the properties of the causation chains with those predicted from different variants of the Gao-Rexford model.
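The IRR's aggregation step lends itself to a short sketch. Assuming a hypothetical report schema in which each AS logs a flap together with the neighbor whose update triggered it, the registry can assemble the causation digraph and test it for cycles; the paper's actual contribution, inferring which chain/cycle shapes are possible under Gao-Rexford-conformant policies, sits on top of this.

```python
from collections import defaultdict

def build_causation_graph(reports):
    """reports: iterable of (as_id, time, triggered_by) tuples, where
    triggered_by is the neighbor AS whose update caused this flap
    (or None for a root-cause event). Returns edges cause -> effect."""
    graph = defaultdict(set)
    for as_id, _time, cause in reports:
        if cause is not None:
            graph[cause].add(as_id)
    return graph

def has_causation_cycle(graph):
    """Standard three-color DFS cycle check over the causation digraph."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = defaultdict(int)
    def dfs(u):
        color[u] = GRAY
        for v in graph[u]:
            if color[v] == GRAY or (color[v] == WHITE and dfs(v)):
                return True
        color[u] = BLACK
        return False
    return any(color[u] == WHITE and dfs(u) for u in list(graph))
```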
Abstract:
In this paper we introduce a theory of policy routing dynamics based on fundamental axioms of routing update mechanisms. We develop a dynamic policy routing model (DPR) that extends the static formalism of the stable paths problem (introduced by Griffin et al.) with discrete synchronous time. DPR captures the propagation of path changes in any dynamic network irrespective of its time-varying topology. We introduce several novel structures such as causation chains, dispute fences, and policy digraphs that model different aspects of routing dynamics and provide insight into how these dynamics manifest in a network. We demonstrate the practicality of the theoretical foundation provided by DPR on two fundamental problems: routing dynamics minimization and policy conflict detection. The dynamics minimization problem uses policy digraphs, which capture the dependencies in routing policies irrespective of the underlying topology dynamics, to solve a graph optimization problem. This optimization problem explicitly minimizes the number of routing update messages in a dynamic network by optimally changing the path preferences of a minimal subset of nodes. The conflict detection problem, on the other hand, draws on a theoretical result of DPR: the root cause of a causation cycle (i.e., a cycle of routing update messages) can be precisely inferred as either a transient route flap or a dispute wheel (i.e., a policy conflict). Using this result we develop SafetyPulse, a token-based distributed algorithm that detects policy conflicts in a dynamic network. SafetyPulse is privacy preserving, computationally efficient, and provably correct.
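As background on the stable paths formalism that DPR extends: a path assignment is stable when no node prefers a path it could form by extending a neighbor's current path. A minimal checker, with a hypothetical encoding of assignments and preferences (DPR itself adds discrete time and update propagation on top of this static notion):

```python
def is_stable(assignment, neighbors, prefer):
    """assignment: node -> path (tuple of nodes ending at the origin);
    neighbors: node -> iterable of adjacent nodes;
    prefer(node, p, q): True if node ranks path p strictly above q."""
    for u, path in assignment.items():
        for v in neighbors[u]:
            candidate = (u,) + assignment[v]
            # Only loop-free extensions of v's current path are available.
            if u not in assignment[v] and prefer(u, candidate, path):
                return False  # u would deviate: not a stable state
    return True
```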
Abstract:
The relative importance of long-term popularity and short-term temporal correlation of references for Web cache replacement policies has not been studied thoroughly. This is partially due to the lack of an accurate characterization of temporal locality that enables the identification of the relative strengths of these two sources of temporal locality in a reference stream. In [21], we proposed such a metric and showed that Web reference streams differ significantly in the prevalence of these two sources of temporal locality. These findings underscore the importance of a Web caching strategy that can adapt in a dynamic fashion to the prevalence of these two sources of temporal locality. In this paper, we propose a novel cache replacement algorithm, GreedyDual*, which is a generalization of GreedyDual-Size. GreedyDual* uses the metrics proposed in [21] to adjust the relative worth of long-term popularity versus short-term temporal correlation of references. Our trace-driven simulation experiments show the superior performance of GreedyDual* when compared to other Web cache replacement policies proposed in the literature.
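GreedyDual-Size, the algorithm GreedyDual* generalizes, is compact enough to sketch: each object carries priority H = L + cost/size, the minimum-H object is evicted, and the inflation value L rises to the evicted H so resident objects age relative to newcomers. The sketch below uses a lazy-deletion heap; GreedyDual*'s adaptive weighting of popularity versus correlation via the metrics of [21] is omitted.

```python
import heapq

class GreedyDualSize:
    def __init__(self, capacity):
        self.capacity, self.used, self.L = capacity, 0, 0.0
        self.entries = {}   # key -> (H, size)
        self.heap = []      # (H, key); stale entries skipped lazily

    def access(self, key, size, cost=1.0):
        if key in self.entries:
            _, size = self.entries[key]               # hit: refresh priority
        else:
            while self.used + size > self.capacity and self.heap:
                h, victim = heapq.heappop(self.heap)
                # Heap entry is live only if it matches the stored priority.
                if victim in self.entries and self.entries[victim][0] == h:
                    self.L = h                        # inflate the clock
                    self.used -= self.entries.pop(victim)[1]
            self.used += size
        h = self.L + cost / size
        self.entries[key] = (h, size)
        heapq.heappush(self.heap, (h, key))
```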
Abstract:
Temporal structure in skilled, fluent action exists at several nested levels. At the largest scale considered here, short sequences of actions that are planned collectively in prefrontal cortex appear to be queued for performance by a cyclic competitive process that operates in concert with a parallel analog representation that implicitly specifies the relative priority of elements of the sequence. At an intermediate scale, single acts, like reaching to grasp, depend on coordinated scaling of the rates at which many muscles shorten or lengthen in parallel. To ensure success of acts such as catching an approaching ball, such parallel rate scaling, which appears to be one function of the basal ganglia, must be coupled to perceptual variables such as time-to-contact. At a finer scale, within each act, desired rate scaling can be realized only if precisely timed muscle activations first accelerate and then decelerate the limbs, to ensure that muscle length changes do not under- or overshoot the amounts needed for precise acts. Each context of action may require a different timed muscle activation pattern than similar contexts. Because context differences that require different treatment cannot be known in advance, a formidable adaptive engine, the cerebellum, is needed to amplify differences within, and continuously search, a vast parallel signal flow, in order to discover contextual "leading indicators" of when to generate distinctive patterns of analog signals. From some parts of the cerebellum, such signals control muscles. But a recent model shows how the lateral cerebellum may serve the competitive queuing system (frontal cortex) as a repository of quickly accessed long-term sequence memories. Thus different parts of the cerebellum may use the same adaptive engine design to serve the lowest and highest of the three levels of temporal structure treated. If so, no one-to-one mapping exists between levels of temporal structure and major parts of the brain. Finally, recent data cast doubt on network-delay models of cerebellar adaptive timing.
Abstract:
A fast and efficient segmentation algorithm based on the Boundary Contour System/Feature Contour System (BCS/FCS) of Grossberg and Mingolla [3] is presented. This implementation is based on the FFT algorithm and the parallelism of the system.
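The speedup claimed comes from replacing direct sliding-window filtering with products in the FFT domain. A minimal sketch of FFT-based 2-D convolution is shown below; the kernel stands in for one oriented BCS filter, and none of this is the authors' implementation.

```python
import numpy as np

def fft_convolve2d(image, kernel):
    """Circular 2-D convolution via the FFT: O(N^2 log N) rather than the
    O(N^2 K^2) cost of direct filtering, which is why an FFT formulation
    pays off for the many oriented filterings BCS/FCS requires."""
    K = np.zeros_like(image, dtype=float)
    kh, kw = kernel.shape
    K[:kh, :kw] = kernel
    K = np.roll(K, (-(kh // 2), -(kw // 2)), axis=(0, 1))  # center kernel
    return np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(K)))
```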
Abstract:
The Fuzzy ART system introduced herein incorporates computations from fuzzy set theory into ART 1. For example, the intersection (∩) operator used in ART 1 learning is replaced by the MIN operator (∧) of fuzzy set theory. Fuzzy ART reduces to ART 1 in response to binary input vectors, but can also learn stable categories in response to analog input vectors. In particular, the MIN operator reduces to the intersection operator in the binary case. Learning is stable because all adaptive weights can only decrease in time. A preprocessing step, called complement coding, uses on-cell and off-cell responses to prevent category proliferation. Complement coding normalizes input vectors while preserving the amplitudes of individual feature activations.
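The operations named in the abstract translate almost line for line into code. A minimal sketch of one Fuzzy ART presentation with complement coding, using the standard choice parameter α, vigilance ρ, and learning rate β (numerical values illustrative):

```python
import numpy as np

def complement_code(x):
    """Append the off-cell pattern 1 - x, so every coded input has the
    same L1 norm and categories cannot proliferate."""
    return np.concatenate([x, 1.0 - x])

def fuzzy_art_step(I, W, alpha=0.001, rho=0.75, beta=1.0):
    """One presentation. Fuzzy AND (elementwise min) replaces ART 1's
    intersection. W is a list of weight vectors; returns updated W and
    the index of the resonating category."""
    scores = [np.minimum(I, w).sum() / (alpha + w.sum()) for w in W]
    for j in np.argsort(scores)[::-1]:            # search by choice value
        match = np.minimum(I, W[j]).sum() / I.sum()
        if match >= rho:                          # vigilance test passes
            # Weights can only shrink, which is why learning is stable.
            W[j] = beta * np.minimum(I, W[j]) + (1 - beta) * W[j]
            return W, j
    W.append(I.copy())                            # no resonance: new category
    return W, len(W) - 1
```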
Abstract:
This article introduces ART 2-A, an efficient algorithm that emulates the self-organizing pattern recognition and hypothesis testing properties of the ART 2 neural network architecture, but at a speed two to three orders of magnitude faster. Analysis and simulations show how the ART 2-A systems correspond to ART 2 dynamics at both the fast-learn limit and at intermediate learning rates. Intermediate learning rates permit fast commitment of category nodes but slow recoding, analogous to properties of word frequency effects, encoding specificity effects, and episodic memory. Better noise tolerance is hereby achieved without a loss of learning stability. The ART 2 and ART 2-A systems are contrasted with the leader algorithm. The speed of ART 2-A makes practical the use of ART 2 modules in large-scale neural computation.
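A minimal sketch of one ART 2-A presentation under the usual simplifications (unit-normalized inputs and weights, dot-product choice); β = 1 gives the fast-learn limit, while a small β gives the slow-recode regime the abstract links to word-frequency-like effects. Parameter values are illustrative, not the paper's.

```python
import numpy as np

def art2a_step(x, W, rho=0.9, beta=0.1):
    """One ART 2-A presentation (sketch). x: input vector; W: list of
    unit-norm category weights. Returns updated W and the chosen index."""
    x = x / np.linalg.norm(x)
    if W:
        sims = [float(w @ x) for w in W]          # choice by dot product
        j = int(np.argmax(sims))
        if sims[j] >= rho:                        # resonance
            w = (1 - beta) * W[j] + beta * x      # nudge toward the input
            W[j] = w / np.linalg.norm(w)          # keep weights unit-norm
            return W, j
    W.append(x.copy())                            # commit a new category
    return W, len(W) - 1
```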