234 results for Pollard's rho algorithm
Abstract:
A fundamental task in bioinformatics involves transferring knowledge from one protein molecule to another by recognizing similarities. Such similarities are sought at different levels: sequence, whole fold, or important substructures. Comparison of binding sites is important for understanding functional similarities among proteins and also for understanding drug cross-reactivities. Current methods in the literature have their own merits and demerits, warranting exploration of newer concepts and algorithms, especially for large-scale comparisons and for obtaining accurate residue-wise mappings. Here, we report the development of a new algorithm, PocketAlign, for obtaining structural superpositions of binding sites. The software is available as a web service at http://proline.physics.iisc.ernet.in/pocketalign/. The algorithm encodes shape descriptors in the form of geometric perspectives, supplemented by chemical group classification. The shape descriptor considers several perspectives, with each residue as the focus, and captures the relative distribution of residues around it in a given site. Residue-wise pairings are computed by comparing the set of perspectives of the first site with that of the second, followed by a greedy approach that incrementally combines residue pairings into a mapping. The mappings in different frames are then evaluated by different metrics encoding the extent of alignment of the individual geometric perspectives. Different initial seed alignments are computed, each subsequently extended by detecting consequential atomic alignments in a three-dimensional grid, and the best 500 are stored in a database. Alignments are then ranked, and the top-scoring alignments are reported and streamed into PyMOL for visualization and analysis. The method is validated for accuracy and sensitivity and benchmarked against existing methods. An advantage of PocketAlign, compared to some of the existing tools for binding-site comparison in the literature, is that it explores different schemes for identifying an alignment and thus has a better potential to capture similarities in ligand recognition abilities. By finding a detailed alignment of a pair of sites, PocketAlign provides insights into why two sites are similar and which residues and atoms contribute to the similarity.
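For illustration only, the following minimal Python sketch captures the greedy residue-pairing idea described above; the descriptor (a sorted inter-residue distance profile), the scoring function, and all names are simplifying assumptions and not the published PocketAlign implementation.

# Minimal sketch (not the published PocketAlign code): greedily build a
# residue-wise mapping between two binding sites from per-residue
# "perspective" descriptors, here reduced to sorted distance profiles.
import numpy as np

def perspective(site, i):
    """Distances from residue i to all other residues, sorted (a toy
    stand-in for the geometric-perspective descriptor)."""
    d = np.linalg.norm(site - site[i], axis=1)
    return np.sort(np.delete(d, i))

def greedy_mapping(site_a, site_b):
    """Score every residue pairing by descriptor similarity, then add
    pairings in order of score, skipping already-used residues."""
    na, nb = len(site_a), len(site_b)
    k = min(na, nb) - 1
    scores = []
    for i in range(na):
        pa = perspective(site_a, i)[:k]
        for j in range(nb):
            pb = perspective(site_b, j)[:k]
            scores.append((np.abs(pa - pb).mean(), i, j))
    mapping, used_a, used_b = [], set(), set()
    for s, i, j in sorted(scores):
        if i not in used_a and j not in used_b:
            mapping.append((i, j))
            used_a.add(i); used_b.add(j)
    return mapping

# Toy usage with random C-alpha coordinates for two 8-residue sites.
rng = np.random.default_rng(0)
a = rng.normal(size=(8, 3))
b = a + rng.normal(scale=0.1, size=(8, 3))
print(greedy_mapping(a, b))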
Abstract:
Motion estimation is one of the most power-hungry operations in video coding. While optimal search methods (e.g., full search) give the best quality, non-optimal methods are often used in order to reduce cost and power. Various algorithms that trade off quality against complexity have been used in practice. Global elimination is an algorithm based on pixel averaging that reduces the complexity of motion search while keeping performance close to that of full search. We propose an adaptive version of the global elimination algorithm that extracts individual macroblock features using the Hadamard transform to optimize the search. The performance achieved is close to that of the full search method and of global elimination. Operational complexity, and hence power, is reduced by 30% to 45% compared to the global elimination method.
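As context, here is a minimal Python sketch of a plain (non-adaptive) global-elimination-style search; the block size, shortlist size, and toy data are assumptions for illustration, not the paper's adaptive Hadamard-based variant.

# Global-elimination-style motion search (illustrative): prune candidates by
# the SAD of 4x4 sub-block means, then evaluate full SAD only on the shortlist.
import numpy as np

def subblock_means(block, s=4):
    h, w = block.shape
    return block.reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def global_elimination_search(cur, ref, top_left, search=8, keep=8):
    y0, x0 = top_left
    N = cur.shape[0]
    cur_means = subblock_means(cur)
    candidates = []
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = y0 + dy, x0 + dx
            if 0 <= y <= ref.shape[0] - N and 0 <= x <= ref.shape[1] - N:
                cand = ref[y:y + N, x:x + N]
                coarse = np.abs(subblock_means(cand) - cur_means).sum()
                candidates.append((coarse, dy, dx, cand))
    candidates.sort(key=lambda t: t[0])          # coarse pruning stage
    best = min(candidates[:keep],                # full SAD on survivors only
               key=lambda t: np.abs(t[3] - cur).sum())
    return best[1], best[2]                      # motion vector (dy, dx)

# Toy usage: a 16x16 block displaced by (2, -3) inside a random reference frame.
rng = np.random.default_rng(1)
ref = rng.integers(0, 256, (64, 64))
cur = ref[18:34, 13:29].copy()
print(global_elimination_search(cur, ref, (16, 16)))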
Abstract:
Bluetooth is a short-range radio technology operating in the unlicensed industrial-scientific-medical (ISM) band at 2.45 GHz. A scatternet is established by linking several piconets together in an ad hoc fashion to yield a global wireless ad hoc network. This paper proposes a polling policy that aims to achieve increased system throughput and reduced packet delays while providing reasonably good fairness among all traffic flows in a Bluetooth scatternet. Experimental results from our proposed algorithm show performance improvements over a well-known existing algorithm.
Abstract:
Use of engineered landfills for the disposal of industrial wastes is currently a common practice. Bentonite is attracting greater attention not only as a capping and lining material in landfills but also as a buffer and backfill material for repositories of high-level nuclear waste around the world. In the design of buffer and backfill materials, it is important to know the swelling pressures of compacted bentonite with different electrolyte solutions. Theoretical studies of swell pressure behaviour are all based on diffuse double layer (DDL) theory. To establish a relation between the swell pressure and the void ratio of the soil, it is necessary to calculate the mid-plane potential in the diffuse part of the interacting ionic double layers. The difficulty in these calculations is the elliptic integral involved in the relation between the half-space distance and the mid-plane potential. Several investigators circumvented this problem using indirect methods or cumbersome numerical techniques. In this work, a novel approach is proposed for theoretical estimation of the swell pressures of fine-grained soils from DDL theory. The proposed approach circumvents the complex computations involved in establishing the relationship between the mid-plane potential and the distance between the interacting plates, in other words, between swell pressure and void ratio.
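For reference, the standard Gouy-Chapman (DDL) relations alluded to above can be written as follows; the nondimensional notation (u for the mid-plane potential, z for the surface potential, kappa for the inverse Debye length, d for the half-space distance, n_0 for the bulk ion concentration) is assumed here and may differ from the paper's.

% Swell pressure from the mid-plane potential (van't Hoff form) and the
% elliptic integral linking the mid-plane potential to the half-space
% distance; notation assumed, not taken from the paper.
\begin{align}
  p        &= 2 n_0 k T \,(\cosh u - 1), \\
  \kappa d &= \int_{u}^{z} \frac{\mathrm{d}y}{\sqrt{2\cosh y - 2\cosh u}}.
\end{align}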
Abstract:
The 4×4 discrete cosine transform is one of the most important building blocks of the emerging video coding standard, viz. H.264. The conventional implementation approximates the transform matrix elements to facilitate integer arithmetic, for which the hardware is suitably designed. Though the transform coding does not involve any multiplications, the quantization process requires sixteen 16-bit multiplications. The algorithm used here eliminates the approximation in transform coding and the multiplication in the quantization process by using algebraic integer coding. We propose an area-efficient implementation of the transform and quantization blocks based on algebraic integer coding. The designs were synthesized with 90 nm TSMC CMOS technology and were also implemented on a Xilinx FPGA. The gate count and throughput achievable in this case are 7000 gates and 125 Msamples/s.
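For background, the conventional H.264 4×4 integer core transform that the algebraic-integer approach is contrasted against can be sketched in Python as below; matrix products are used for clarity even though hardware realizes the transform with additions and shifts, and the quantization stage is omitted.

# Conventional H.264 4x4 integer core transform, Y = C X C^T (the
# algebraic-integer variant proposed in the abstract is not reproduced here).
import numpy as np

C = np.array([[1,  1,  1,  1],
              [2,  1, -1, -2],
              [1, -1, -1,  1],
              [1, -2,  2, -1]])

def forward_core_transform(x):
    """4x4 integer approximation of the DCT used by H.264."""
    return C @ x @ C.T

# Toy usage on a random 4x4 residual block.
rng = np.random.default_rng(2)
block = rng.integers(-128, 128, (4, 4))
print(forward_core_transform(block))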
Abstract:
In Universal Mobile Telecommunication Systems (UMTS), the Downlink Shared Channel (DSCH) can be used for providing streaming services. The traffic model for streaming services differs from the commonly used continuously-backlogged model. Each connection specifies a required service rate over an interval of time, k, called the "control horizon". In this paper, our objective is to determine how k DSCH frames should be shared among a set of I connections. We need a scheduler that is both efficient and fair, and we introduce the notion of discrepancy to balance the conflicting requirements of aggregate throughput and fairness. Our aim is to schedule the mobiles in such a way that the schedule minimizes the discrepancy over the k frames. We propose an optimal and computationally efficient algorithm called STEM+. The proof of the optimality of STEM+ when applied to the UMTS rate sets is the major contribution of this paper. We also show that STEM+ performs better in terms of both fairness and aggregate throughput than other scheduling algorithms. Thus, STEM+ achieves both fairness and efficiency and is therefore an appealing algorithm for scheduling streaming connections.
Abstract:
This paper investigates in-line spring-mass systems (A(n)), fixed at one end and free at the other, with n degrees of freedom (d.f.). The objective is to find feasible in-line systems (B(n)) that are isospectral to a given system. The spring-mass systems A(n) and B(n) are represented by Jacobi matrices. An error function is developed with the help of the Jacobi matrices of A(n) and B(n). The problem of finding the isospectral systems is posed as an optimization problem with the aim of minimizing the error function. The approach for creating isospectral systems uses the fact that the traces of two isospectral Jacobi matrices A(n) and B(n) must be identical. A modification is made to the diagonal elements of the given Jacobi matrix A(n) to create the isospectral systems. The optimization problem is solved using the firefly algorithm augmented by a local search procedure. Numerical results are obtained and the resulting isospectral systems are shown for 4-d.f. and 10-d.f. systems.
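To make the setting concrete, the following Python sketch builds the Jacobi matrix of a fixed-free in-line spring-mass chain and computes its spectrum; the indexing convention and the toy parameter values are assumptions for illustration, not taken from the paper.

# Illustrative sketch: Jacobi matrix J = M^{-1/2} K M^{-1/2} of a fixed-free
# in-line spring-mass chain, whose eigenvalues are the squared natural
# frequencies shared by any isospectral system.
import numpy as np

def jacobi_matrix(masses, springs):
    """masses[i] is m_{i+1}; springs[i] is k_{i+1}, with springs[0]
    attaching the first mass to the fixed wall."""
    n = len(masses)
    K = np.zeros((n, n))
    for i in range(n):
        K[i, i] = springs[i] + (springs[i + 1] if i + 1 < n else 0.0)
        if i + 1 < n:
            K[i, i + 1] = K[i + 1, i] = -springs[i + 1]
    d = 1.0 / np.sqrt(np.array(masses))
    return np.diag(d) @ K @ np.diag(d)

# Toy usage: spectrum of a 4-d.f. chain; an isospectral system B(4) would
# share exactly these eigenvalues (and hence the same trace).
J = jacobi_matrix([1.0, 2.0, 1.5, 1.0], [3.0, 2.0, 2.5, 1.0])
print(np.linalg.eigvalsh(J))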
Abstract:
We propose a randomized algorithm for large-scale SVM learning which solves the problem by iterating over random subsets of the data. Crucial to the algorithm's scalability is the size of the subsets chosen. In the context of text classification we show that, by using ideas from random projections, a sample size of O(log n) can be used to obtain a solution which is close to the optimal one with high probability. Experiments on synthetic and real-life data sets demonstrate that the algorithm scales up SVM learners without loss in accuracy.
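A generic sketch of this kind of subset-iteration SVM training is given below in Python; the working-set size constant, the carry-over of support vectors between rounds, and the synthetic data are assumptions for illustration and not necessarily the exact procedure analysed in the paper.

# Generic sketch: train an SVM by iterating over small random subsets,
# carrying the current support vectors from one round to the next.
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_classification

def subset_svm(X, y, rounds=20, c=40, seed=0):
    rng = np.random.default_rng(seed)
    n = len(y)
    subset_size = max(2, int(c * np.log(n)))   # O(log n) working set
    sv_idx = np.array([], dtype=int)           # indices of current support vectors
    clf = None
    for _ in range(rounds):
        sample = rng.choice(n, size=subset_size, replace=False)
        work = np.union1d(sv_idx, sample)
        clf = SVC(kernel="linear", C=1.0).fit(X[work], y[work])
        sv_idx = work[clf.support_]            # keep only the support vectors
    return clf

# Toy usage on a synthetic high-dimensional classification problem.
X, y = make_classification(n_samples=5000, n_features=200, random_state=0)
model = subset_svm(X, y)
print(model.score(X, y))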
Abstract:
We present a fast algorithm for computing a Gomory-Hu tree or cut tree for an unweighted undirected graph G = (V,E). The expected running time of our algorithm is Õ(mc), where |E| = m and c is the maximum u-v edge connectivity over u,v ∈ V. When the input graph is also simple (i.e., it has no parallel edges), the u-v edge connectivity for each pair of vertices u and v is at most n-1, so the expected running time of our algorithm for simple unweighted graphs is Õ(mn). All the algorithms currently known for constructing a Gomory-Hu tree [8,9] use n-1 minimum s-t cut (i.e., max flow) subroutines. This, in conjunction with the current fastest Õ(n^{20/9}) max flow algorithm due to Karger and Levine [11], yields the current best running time of Õ(n^{20/9}·n) for Gomory-Hu tree construction on simple unweighted graphs with m edges and n vertices. Thus we present the first Õ(mn) algorithm for constructing a Gomory-Hu tree for simple unweighted graphs. We do not use a max flow subroutine here; we present an efficient tree packing algorithm for computing Steiner edge connectivity and use this algorithm as our main subroutine. The advantage of using a tree packing algorithm for constructing a Gomory-Hu tree is that the work done in computing a minimum Steiner cut for a Steiner set S ⊆ V can be reused for computing a minimum Steiner cut for certain Steiner sets S' ⊆ S.
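For orientation, the defining property of a Gomory-Hu tree (any u-v min-cut value in G equals the smallest edge weight on the u-v path in the tree) can be checked in Python with the classical flow-based construction available in networkx; the example graph is an arbitrary toy instance, and this is not the tree-packing algorithm of the paper.

# Build a Gomory-Hu tree with the classical construction (n-1 max flows
# inside networkx) and read a min s-t cut value off the tree.
import networkx as nx

G = nx.cycle_graph(6)                      # simple unweighted graph
G.add_edges_from([(0, 3), (1, 4)])         # a couple of chords
nx.set_edge_attributes(G, 1, "capacity")   # unit capacities for unweighted edges

T = nx.gomory_hu_tree(G)

def min_cut_from_tree(tree, s, t):
    """Min s-t cut value = smallest edge weight on the s-t path in the tree."""
    path = nx.shortest_path(tree, s, t)
    return min(tree[u][v]["weight"] for u, v in zip(path, path[1:]))

# The tree value matches the u-v edge connectivity computed directly on G.
print(min_cut_from_tree(T, 0, 4), nx.edge_connectivity(G, 0, 4))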