207 results for Weak Greedy Algorithms
Abstract:
Owing to the lack of atmospheric vertical profile data with sufficient accuracy and vertical resolution, the response of the deep atmosphere to the passage of monsoon systems over the Bay of Bengal had not been satisfactorily elucidated. Under the Indian Climate Research Programme, a special observational programme called the 'Bay of Bengal Monsoon Experiment' (BOBMEX) was conducted during July-August 1999. The present study is based on the high-resolution radiosondes launched during BOBMEX in the northern Bay. Clear changes in the vertical thermal structure of the atmosphere between active and weak phases of convection have been observed. The atmosphere cooled below 6 km height and warmed between 6 and 13 km height. The warmest layer was located between 8 and 10 km, and the coldest layer was found just below 5 km. The largest fluctuations in the humidity field occurred in the mid-troposphere. The observed changes between active and weak phases of convection are compared with results from an atmospheric general circulation model similar to that used at the National Centre for Medium Range Weather Forecasting, New Delhi. The model does not realistically capture some important features of the temperature and humidity profiles in the lower troposphere and in the boundary layer during the active and weak spells.
Abstract:
We have imaged the H92α and H75α radio recombination line (RRL) emission from the starburst galaxy NGC 253 with a resolution of ~4 pc. The peak of the RRL emission at both frequencies coincides with the unresolved radio nucleus. Both lines observed toward the nucleus are extremely wide, with FWHMs of ~200 km s^-1. Modeling the RRL and radio continuum data for the radio nucleus shows that the lines arise in gas whose density is ~10^4 cm^-3 and whose mass is a few thousand M_sun, which requires an ionizing flux of (6-20) x 10^51 photons s^-1. We consider a supernova remnant (SNR) expanding in a dense medium, a star cluster, and also an active galactic nucleus (AGN) as potential ionizing sources. Based on dynamical arguments, we rule out an SNR as a viable ionizing source. A star cluster model is considered, and the dynamics of the ionized gas in a stellar-wind-driven structure are investigated. Such a model is consistent with the properties of the ionized gas only for a cluster younger than ~10^5 yr. The existence of such a young cluster at the nucleus seems improbable. The third model assumes the ionizing source to be an AGN at the nucleus. In this model, it is shown that the observed X-ray flux is too weak to account for the required ionizing photon flux. However, the ionization requirement can be explained if the accretion disk is assumed to have a big blue bump in its spectrum. Hence, we favor an AGN at the nucleus as the source responsible for ionizing the observed RRLs. A hybrid model consisting of an inner advection-dominated accretion flow disk and an outer thin disk is suggested, which could explain the radio, UV, and X-ray luminosities of the nucleus.
Effect of repeated blast loading on damage characteristics of tunnels in weak rock mass: a case study
Abstract:
The study reports the first indication of a lyotropic liquid crystalline phase in an aqueous solution of the polysaccharide xanthan gum, which serves as a scalable and reversible weak alignment medium, tunable by physical parameters, for the enantiodiscrimination of water-soluble chiral molecules.
Abstract:
A fundamental task in bioinformatics involves the transfer of knowledge from one protein molecule onto another by way of recognizing similarities. Such similarities are obtained at different levels: that of sequence, whole fold, or important substructures. Comparison of binding sites is important for understanding functional similarities among proteins and also for understanding drug cross-reactivities. Current methods in the literature have their own merits and demerits, warranting exploration of newer concepts and algorithms, especially for large-scale comparisons and for obtaining accurate residue-wise mappings. Here, we report the development of a new algorithm, PocketAlign, for obtaining structural superpositions of binding sites. The software is available as a web service at http://proline.physics.iisc.ernet.in/pocketalign/. The algorithm encodes shape descriptors in the form of geometric perspectives, supplemented by chemical group classification. The shape descriptor considers several perspectives, with each residue as the focus, and captures the relative distribution of residues around it in a given site. Residue-wise pairings are computed by comparing the set of perspectives of the first site with that of the second, followed by a greedy approach that incrementally combines residue pairings into a mapping. The mappings in different frames are then evaluated by different metrics encoding the extent of alignment of the individual geometric perspectives. Different initial seed alignments are computed, each subsequently extended by detecting consequential atomic alignments in a three-dimensional grid, and the best 500 are stored in a database. Alignments are then ranked, and the top-scoring alignments are reported, which are then streamed into PyMOL for visualization and analyses. The method is validated for accuracy and sensitivity and benchmarked against existing methods. An advantage of PocketAlign, as compared to some of the existing tools available for binding-site comparison in the literature, is that it explores different schemes for identifying an alignment and thus has a better potential to capture similarities in ligand-recognition abilities. PocketAlign, by finding a detailed alignment of a pair of sites, provides insights into why two sites are similar and which sets of residues and atoms contribute to the similarity.
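To make the greedy pairing step concrete, here is a minimal Python sketch, not the PocketAlign implementation: it assumes each residue is summarized by a hypothetical numeric "perspective" descriptor, scores all cross-site residue pairs by descriptor similarity, and greedily accepts the best-scoring pairs into a one-to-one mapping.

```python
import numpy as np

def greedy_site_mapping(perspectives_a, perspectives_b):
    """Greedily build a one-to-one residue mapping between two binding
    sites. perspectives_a, perspectives_b: dicts mapping residue id ->
    descriptor vector (a stand-in for PocketAlign's geometric
    perspectives)."""
    # Score every cross-site residue pair; higher = more similar.
    scored = []
    for ra, va in perspectives_a.items():
        for rb, vb in perspectives_b.items():
            score = -np.linalg.norm(np.asarray(va) - np.asarray(vb))
            scored.append((score, ra, rb))
    scored.sort(reverse=True)  # best pairs first

    mapping, used_a, used_b = {}, set(), set()
    for score, ra, rb in scored:
        # Greedy step: accept a pairing only if both residues are free.
        if ra not in used_a and rb not in used_b:
            mapping[ra] = rb
            used_a.add(ra)
            used_b.add(rb)
    return mapping
```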
Abstract:
The crystal structure of Flunarizine, an anticonvulsant drug, is analyzed in terms of intermolecular interactions involving fluorine. The structure displays motifs formed only by weak C–H⋯F and C–H⋯π interactions; the motifs thus generated show cavities that could serve as hosts for complexation. Haloperidol, an antipsychotic drug, shows F⋯F interactions in the crystalline lattice in lieu of Cl⋯Cl interactions; however, strong O–H⋯N interactions dominate the packing. The salient features of the two structures reveal that, even though organic fluorine has a lower tendency to engage in hydrogen bonding and F⋯F interactions, these interactions could play a significant role in the design of molecular assemblies via crystal engineering.
Abstract:
We have developed two reduced-complexity bit-allocation algorithms for MP3/AAC-based audio encoding, which can be useful at low bit-rates. One algorithm derives the optimum bit allocation using constrained optimization of the weighted noise-to-mask ratio, and the second uses decoupled iterations for distortion control and rate control, with convergence criteria. MUSHRA-based evaluation indicated that the new algorithms are comparable to AAC while requiring only about one-tenth the complexity.
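As a rough illustration of the decoupled distortion-control and rate-control iterations, the following sketch alternates the two controls on per-band quantizer step sizes until both convergence criteria are met. The band data, noise and rate models, and adjustment factors are illustrative assumptions, not the authors' algorithm or the MP3/AAC reference loops.

```python
import numpy as np

def decoupled_bit_allocation(band_energy, mask_thresh, bit_budget,
                             max_iters=50):
    """Toy sketch of decoupled rate/distortion iterations for
    perceptual audio coding, on hypothetical per-band data."""
    step = np.ones(len(band_energy))  # per-band quantizer step sizes

    for _ in range(max_iters):
        # Simple models: quantization noise grows with step size;
        # spent bits shrink as steps grow.
        noise = step ** 2 / 12.0
        bits = np.sum(np.maximum(0.0,
                                 0.5 * np.log2(band_energy / noise)))

        over_budget = bits > bit_budget
        audible = noise > mask_thresh  # noise exceeds masking threshold

        if not over_budget and not audible.any():
            break  # both convergence criteria met

        # Rate control: coarsen all bands a little if over budget.
        if over_budget:
            step *= 1.05
        # Distortion control: refine only bands with audible noise.
        step[audible] /= 1.05

    return step
```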
Abstract:
Web services are now a key ingredient of the software services offered by software enterprises. Many standardized web services are now available as commodity offerings from web service providers. An important problem for a web service requester is the web service composition problem, which involves selecting the right mix of web service offerings to execute an end-to-end business process. Web service offerings are now available in bundled form as composite web services, and more recently, volume discounts are also on offer, based on the number of executions of web services requested. In this paper, we develop efficient algorithms for the web service composition problem in the presence of composite web service offerings and volume discounts. We model this problem as a combinatorial auction with volume discounts. We first develop efficient polynomial-time algorithms when the end-to-end service involves a linear workflow of web services. Next, we develop efficient polynomial-time algorithms when the end-to-end service involves a tree workflow of web services.
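For the linear-workflow case, one natural polynomial-time approach can be sketched as a dynamic program over task boundaries. This is an illustration under the assumption that each offer prices a contiguous block of tasks in the chain; it is not necessarily the algorithm developed in the paper, and volume discounts are not modeled here.

```python
def cheapest_linear_composition(n_tasks, offers):
    """offers: list of (start, end, price) covering tasks start..end
    (0-indexed, inclusive). Returns the minimum total price to cover
    tasks 0..n_tasks-1, or None if no full cover exists."""
    INF = float("inf")
    # best[i] = cheapest cost to cover the first i tasks
    best = [INF] * (n_tasks + 1)
    best[0] = 0.0
    for i in range(1, n_tasks + 1):
        for start, end, price in offers:
            # An offer ending at task i-1 extends a cover of the
            # first `start` tasks to a cover of the first i tasks.
            if end == i - 1 and best[start] < INF:
                best[i] = min(best[i], best[start] + price)
    return best[n_tasks] if best[n_tasks] < INF else None

# Example: three tasks, with a bundled offer for tasks 0-1.
print(cheapest_linear_composition(
    3, [(0, 0, 5.0), (1, 1, 6.0), (0, 1, 9.0), (2, 2, 4.0)]))  # 13.0
```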
Abstract:
We present two online algorithms for maintaining a topological order of a directed acyclic graph as arcs are added, and detecting a cycle when one is created. Our first algorithm takes O(m^(1/2)) amortized time per arc and our second algorithm takes O(n^(2.5)/m) amortized time per arc, where n is the number of vertices and m is the total number of arcs. For sparse graphs, our O(m^(1/2)) bound improves the best previous bound by a factor of log n and is tight to within a constant factor for a natural class of algorithms that includes all the existing ones. Our main insight is that the two-way search method of previous algorithms does not require an ordered search, but can be more general, allowing us to avoid the use of heaps (priority queues). Instead, the deterministic version of our algorithm uses (approximate) median-finding; the randomized version of our algorithm uses uniform random sampling. For dense graphs, our O(n^(2.5)/m) bound improves the best previously published bound by a factor of n^(1/4) and a recent bound obtained independently of our work by a factor of log n. Our main insight is that graph search is wasteful when the graph is dense and can be avoided by searching the topological order space instead. Our algorithms extend to the maintenance of strong components, in the same asymptotic time bounds.
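To make the setting concrete, here is a minimal sketch of incremental topological-order maintenance in the style of the earlier limited-search algorithms the paper improves upon, not its new two-way-search method: when an inserted arc u -> v violates the current order, it searches forward from v within the affected window, reports a cycle if it reaches u, and otherwise locally reorders.

```python
class IncrementalTopo:
    """Maintain a topological order of a DAG under arc insertions,
    detecting a cycle when an insertion would create one. A simple
    illustrative sketch; the paper's algorithms achieve better
    amortized bounds."""

    def __init__(self, n):
        self.adj = [[] for _ in range(n)]
        self.pos = list(range(n))    # pos[v] = index of v in the order
        self.order = list(range(n))  # order[i] = vertex at index i

    def add_arc(self, u, v):
        """Insert arc u -> v. Returns False (leaving the graph
        unchanged) if the arc would create a cycle."""
        lb, ub = self.pos[v], self.pos[u]
        if lb <= ub:
            # The new arc violates the current order: search forward
            # from v, restricted to positions in the window [lb, ub].
            seen, stack = set(), [v]
            while stack:
                x = stack.pop()
                if x == u:
                    return False  # v already reaches u: cycle
                if x not in seen:
                    seen.add(x)
                    stack.extend(y for y in self.adj[x]
                                 if lb <= self.pos[y] <= ub)
            # Move the vertices reachable from v behind the rest of
            # the window, preserving relative order on both parts.
            window = self.order[lb:ub + 1]
            moved = [w for w in window if w in seen]
            kept = [w for w in window if w not in seen]
            self.order[lb:ub + 1] = kept + moved
            for i in range(lb, ub + 1):
                self.pos[self.order[i]] = i
        self.adj[u].append(v)
        return True
```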
Abstract:
Given an undirected unweighted graph G = (V, E) and an integer k ≥ 1, we consider the problem of computing the edge connectivities of all those (s, t) vertex pairs whose edge connectivity is at most k. We present an algorithm with expected running time Õ(m + nk^3) for this problem, where |V| = n and |E| = m. Our output is a weighted tree T whose nodes are the sets V_1, V_2, ..., V_l of a partition of V, with the property that the edge connectivity in G between any two vertices s ∈ V_i and t ∈ V_j, for i ≠ j, is equal to the weight of the lightest edge on the path between V_i and V_j in T. Also, two vertices s and t belong to the same V_i for any i if and only if they have an edge connectivity greater than k. Currently, the best algorithm for this problem needs to compute all-pairs min-cuts in an O(nk) edge graph; this takes Õ(m + n^(5/2) k min{k^(1/2), n^(1/6)}) time. Our algorithm is much faster for small values of k; in fact, it is faster whenever k is o(n^(5/6)). Our algorithm yields the useful corollary that in Õ(m + nc^3) time, where c is the size of the global min-cut, we can compute the edge connectivities of all those pairs of vertices whose edge connectivity is at most αc for some constant α. We also present an Õ(m + n) Monte Carlo algorithm for the approximate version of this problem. This algorithm is applicable to weighted graphs as well. Our algorithm, with some modifications, also solves another problem called the minimum T-cut problem. Given T ⊆ V of even cardinality, we present an Õ(m + nk^3) algorithm to compute a minimum cut that splits T into two odd-cardinality components, where k is the size of this cut.
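To illustrate how the output tree T would be used, here is a small sketch with assumed data structures (a tree adjacency map and a vertex-to-node map standing in for the paper's construction): a query returns the minimum edge weight on the tree path between the nodes containing s and t, or reports connectivity greater than k when they share a node.

```python
from collections import deque

def edge_connectivity_query(tree_adj, node_of, s, t, k):
    """Answer an (s, t) edge-connectivity query from the partition
    tree T. tree_adj: {tree_node: [(neighbor, edge_weight), ...]};
    node_of: maps each graph vertex to its partition-tree node."""
    a, b = node_of[s], node_of[t]
    if a == b:
        return f"> {k}"  # same partition class: connectivity exceeds k
    # BFS from a to b, remembering each node's parent and edge weight.
    parent = {a: (None, None)}
    queue = deque([a])
    while queue:
        x = queue.popleft()
        if x == b:
            break
        for y, w in tree_adj[x]:
            if y not in parent:
                parent[y] = (x, w)
                queue.append(y)
    # Walk back from b to a, taking the lightest edge on the path.
    bottleneck = float("inf")
    x = b
    while parent[x][0] is not None:
        x, w = parent[x]
        bottleneck = min(bottleneck, w)
    return bottleneck
```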
Abstract:
We propose two variants of the Q-learning algorithm that both use two timescales. One of these updates the Q-values of all feasible state-action pairs at each instant, while the other updates the Q-values of states with actions chosen according to the 'current' randomized policy. A sketch of the convergence of the algorithms is given. Finally, numerical experiments using the proposed algorithms for routing on different network topologies are presented, and performance comparisons with the regular Q-learning algorithm are shown.
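A minimal sketch of the two-timescale idea, with generic step-size schedules and a softmax policy that are stand-ins rather than the paper's exact updates: Q-values move on the faster timescale while the randomized policy's preferences move on the slower one.

```python
import numpy as np

def two_timescale_q_learning(n_states, n_actions, step, reward,
                             iters=10000, gamma=0.9, seed=0):
    """Toy two-timescale Q-learning sketch. step(s, a) -> next state,
    reward(s, a) -> immediate reward (both user-supplied)."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, n_actions))
    theta = np.zeros((n_states, n_actions))  # policy preferences

    s = 0
    for n in range(1, iters + 1):
        a_fast = 1.0 / n                         # faster timescale
        a_slow = 1.0 / (1.0 + n * np.log(n + 1))  # slower timescale

        # Sample an action from the 'current' randomized policy.
        p = np.exp(theta[s] - theta[s].max())
        p /= p.sum()
        a = rng.choice(n_actions, p=p)

        s2, r = step(s, a), reward(s, a)
        # Fast timescale: standard Q-value update.
        Q[s, a] += a_fast * (r + gamma * Q[s2].max() - Q[s, a])
        # Slow timescale: nudge preferences toward high-value actions.
        theta[s, a] += a_slow * (Q[s, a] - Q[s].mean())
        s = s2
    return Q, theta
```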
Abstract:
We present four new reinforcement learning algorithms based on actor-critic and natural-gradient ideas, and provide their convergence proofs. Actor-critic reinforcement learning methods are online approximations to policy iteration in which the value-function parameters are estimated using temporal difference learning and the policy parameters are updated by stochastic gradient descent. Methods based on policy gradients in this way are of special interest because of their compatibility with function approximation methods, which are needed to handle large or infinite state spaces. The use of temporal difference learning in this way is of interest because in many applications it dramatically reduces the variance of the gradient estimates. The use of the natural gradient is of interest because it can produce better conditioned parameterizations and has been shown to further reduce variance in some cases. Our results extend prior two-timescale convergence results for actor-critic methods by Konda and Tsitsiklis by using temporal difference learning in the actor and by incorporating natural gradients, and they extend prior empirical studies of natural actor-critic methods by Peters, Vijayakumar and Schaal by providing the first convergence proofs and the first fully incremental algorithms.
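As a generic illustration of the actor-critic scheme described, not one of the paper's four natural-gradient algorithms, the sketch below estimates the value function by TD(0) with linear features (an assumed parameterization) while the actor takes vanilla policy-gradient steps driven by the TD error.

```python
import numpy as np

def actor_critic(env_step, env_reset, phi, n_actions, n_features,
                 episodes=500, gamma=0.99, alpha_v=0.05, alpha_p=0.01,
                 seed=0):
    """One-step actor-critic with linear function approximation.
    phi(s) -> feature vector; env_step(s, a) -> (next_s, reward, done);
    env_reset() -> initial state. All are user-supplied stand-ins."""
    rng = np.random.default_rng(seed)
    v = np.zeros(n_features)                    # critic weights
    theta = np.zeros((n_actions, n_features))   # actor weights

    for _ in range(episodes):
        s, done = env_reset(), False
        while not done:
            f = phi(s)
            # Softmax policy over action preferences theta @ f.
            prefs = theta @ f
            p = np.exp(prefs - prefs.max())
            p /= p.sum()
            a = rng.choice(n_actions, p=p)

            s2, r, done = env_step(s, a)
            # TD error from the critic's value estimates.
            delta = r + (0.0 if done else gamma * (phi(s2) @ v)) - f @ v
            v += alpha_v * delta * f             # critic: TD(0) update
            # Actor: gradient of log softmax policy, scaled by delta.
            grad = -np.outer(p, f)
            grad[a] += f
            theta += alpha_p * delta * grad
            s = s2
    return theta, v
```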
Abstract:
Let G = (V, E) be a weighted undirected graph with non-negative edge weights. We consider the problem of efficiently computing approximate distances between all pairs of vertices in G. While many efficient algorithms are known for this problem in unweighted graphs, not many results are known for weighted graphs. Zwick [14] showed that for any fixed ε > 0, stretch 1 + ε distances between all pairs of vertices in a weighted directed graph on n vertices can be computed in Õ(n^ω) time, where ω < 2.376 is the exponent of matrix multiplication. It is known that finding distances of stretch less than 2 between all pairs of vertices in G is at least as hard as Boolean matrix multiplication of two n × n matrices. It is also known that all-pairs stretch 3 distances can be computed in Õ(n^2) time and all-pairs stretch 7/3 distances can be computed in Õ(n^(7/3)) time. Here we consider efficient algorithms for the problem of computing all-pairs stretch (2 + ε) distances in G, for any 0 < ε < 1. We show that all-pairs stretch (2 + ε) distances for any fixed ε > 0 in G can be computed in expected time O(n^(9/4) log n). This algorithm uses a fast rectangular matrix multiplication subroutine. We also present a combinatorial algorithm (that is, one that does not use fast matrix multiplication) with expected running time O(n^(9/4)) for computing all-pairs stretch 5/2 distances in G.
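For intuition about stretch guarantees, here is a hedged sketch of the classical center-sampling idea behind stretch-3 style estimates, not the paper's stretch-(2 + ε) algorithm: sample about sqrt(n) centers, run Dijkstra from each, and estimate d(u, v) by routing through u's nearest center. The 3-approximation argument applies when u's nearest center is no farther from u than v is; the full constructions also handle nearby pairs directly.

```python
import heapq, math, random

def dijkstra(adj, src):
    """adj: {u: [(v, w), ...]} with every vertex present as a key.
    Returns a dict of shortest distances from src."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, math.inf):
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, math.inf):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def stretch3_style_oracle(adj, seed=0):
    """Center-sampling distance estimator (illustrative only)."""
    rng = random.Random(seed)
    nodes = list(adj)
    centers = rng.sample(nodes, max(1, int(math.sqrt(len(nodes)))))
    from_center = {c: dijkstra(adj, c) for c in centers}
    # Nearest sampled center for every vertex.
    near = {u: min(centers,
                   key=lambda c: from_center[c].get(u, math.inf))
            for u in nodes}

    def estimate(u, v):
        # Route via u's nearest center: d(u,c) + d(c,v) >= d(u,v).
        c = near[u]
        return (from_center[c].get(u, math.inf)
                + from_center[c].get(v, math.inf))
    return estimate
```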
Abstract:
In this article, finite-time consensus algorithms for a swarm of self-propelling agents, based on sliding-mode control and algebraic graph theory, are presented. Algorithms are developed for swarms that can be described by balanced graphs and that are composed of agents with dynamics of the same order. Agents with first- and higher-order dynamics are considered. For consensus, the agents' inputs are chosen to enforce sliding mode on surfaces that depend on the graph Laplacian matrix. The algorithms allow tuning of the time taken by the swarm to reach consensus as well as of the consensus value. As an example, the case in which a swarm of first-order agents is in cyclic pursuit is considered.
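A minimal simulation of the first-order case, using a generic signum-based protocol on the graph Laplacian; the gain, step size, and cyclic-pursuit graph below are assumptions for illustration, not the article's specific sliding-surface design.

```python
import numpy as np

def simulate_finite_time_consensus(L, x0, gain=1.0, dt=1e-3,
                                   steps=20000):
    """Simulate x_dot = -gain * sign(L x) for first-order agents.
    L: Laplacian of a balanced graph; x0: initial agent states."""
    x = np.array(x0, dtype=float)
    history = [x.copy()]
    for _ in range(steps):
        s = L @ x                        # per-agent sliding surface
        x = x - dt * gain * np.sign(s)   # discontinuous consensus input
        history.append(x.copy())
    return np.array(history)

# Example: four agents in cyclic pursuit (directed cycle, balanced).
n = 4
L = np.eye(n) - np.roll(np.eye(n), -1, axis=1)
traj = simulate_finite_time_consensus(L, [0.0, 1.0, 3.0, 6.0])
print(traj[-1])  # states should be (nearly) equal, up to chattering
```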