5 results for Exact Algorithms
at National Center for Biotechnology Information - NCBI
Abstract:
Several basic olfactory tasks must be solved by highly olfactory animals, including background suppression, multiple object separation, mixture separation, and source identification. The large number N of classes of olfactory receptor cells—hundreds or thousands—permits the use of computational strategies and algorithms that would not be effective in a stimulus space of low dimension. A model of the patterns of olfactory receptor responses, based on the broad distribution of olfactory thresholds, is constructed. Representing one odor from the viewpoint of another then allows a common description of the most important basic problems and shows how to solve them when N is large. One possible biological implementation of these algorithms uses action potential timing and adaptation as the “hardware” features that are responsible for effective neural computation.
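A minimal sketch of the underlying idea, under assumed parameters (the sensitivity distribution, concentrations, and activation cutoff below are illustrative choices, not the paper's model): when N is large, an odor's binary activation pattern across receptor classes remains recognizable even when superimposed on a background odor.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000  # number of receptor classes; "hundreds or thousands" per the abstract

def random_odor():
    # Assumed model: per-receptor sensitivities spread over several orders of
    # magnitude, standing in for the "broad distribution of olfactory thresholds".
    return 10 ** rng.uniform(-3, 0, size=N)

def active_set(odor, concentration, cutoff=0.1):
    """Binary activation pattern: receptors driven above an assumed cutoff."""
    return odor * concentration > cutoff

target, background = random_odor(), random_odor()
code = active_set(target, 1.0)                    # the target's "signature"
mixture = code | active_set(background, 1.0)      # target + background mixture

# With N large, nearly all of the target's signature survives in the mixture,
# while an unrelated odor activates only a chance fraction of it.
print("signature recovered in mixture:", (mixture & code).sum() / code.sum())
print("chance overlap with random odor:",
      (active_set(random_odor(), 1.0) & code).sum() / code.sum())
```

The gap between near-complete recovery and the chance overlap is what makes background suppression and mixture separation tractable in a high-dimensional receptor space.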
Abstract:
The pathognomonic plaques of Alzheimer’s disease are composed primarily of the 39- to 43-aa β-amyloid (Aβ) peptide. Crosslinking of Aβ peptides by tissue transglutaminase (tTg) indicates that Gln15 of one peptide is proximate to Lys16 of another in aggregated Aβ. Here we report how the fibril structure is resolved by mapping interstrand distances in this core region of the Aβ peptide chain with solid-state NMR. Isotopic substitution provides the source points for measuring distances in aggregated Aβ. Peptides containing a single carbonyl ¹³C label at Gln15, Lys16, Leu17, or Val18 were synthesized and evaluated by NMR dipolar recoupling methods for the measurement of interpeptide distances to a resolution of 0.2 Å. Analysis of these data establishes that this central core of Aβ consists of a parallel β-sheet structure in which identical residues on adjacent chains are aligned directly, i.e., in register. Our data, in conjunction with existing structural data, establish that the Aβ fibril is a hydrogen-bonded, parallel β-sheet defining the long axis of fibril propagation.
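For context on why dipolar recoupling resolves distances this finely (standard NMR physics quoted for orientation, not details from the paper): the through-space dipolar coupling between two ¹³C labels falls off as 1/r³, so a 0.2 Å change in interstrand distance produces a measurable change in coupling. A rough numerical sketch:

```python
# Hedged illustration (textbook NMR physics, not the paper's analysis): the
# 13C-13C dipolar coupling scales as 1/r^3, which is what turns a coupling
# measurement into a distance measurement.
import math

MU0_OVER_4PI = 1e-7     # T*m/A
HBAR = 1.0546e-34       # J*s
GAMMA_13C = 6.728e7     # rad/(s*T), gyromagnetic ratio of 13C

def dipolar_coupling_hz(r_angstrom: float) -> float:
    """13C-13C dipolar coupling constant d/(2*pi) at internuclear distance r."""
    r = r_angstrom * 1e-10
    return MU0_OVER_4PI * HBAR * GAMMA_13C**2 / r**3 / (2 * math.pi)

# ~4.8 A is the typical strand spacing in a hydrogen-bonded beta-sheet;
# a 0.2 A change shifts the coupling by several Hz.
for r in (4.8, 5.0, 5.2):
    print(f"r = {r:.1f} A  ->  {dipolar_coupling_hz(r):.1f} Hz")
```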
Abstract:
In this paper, we give two infinite families of explicit exact formulas that generalize Jacobi’s (1829) 4 and 8 squares identities to 4n² or 4n(n + 1) squares, respectively, without using cusp forms. Our 24 squares identity leads to a different formula for Ramanujan’s tau function τ(n), when n is odd. These results arise in the setting of Jacobi elliptic functions, Jacobi continued fractions, Hankel or Turánian determinants, Fourier series, Lambert series, inclusion/exclusion, Laplace expansion formula for determinants, and Schur functions. We have also obtained many additional infinite families of identities in this same setting that are analogous to the η-function identities in appendix I of Macdonald’s work [Macdonald, I. G. (1972) Invent. Math. 15, 91–143]. A special case of our methods yields a proof of the two conjectured [Kac, V. G. and Wakimoto, M. (1994) in Progress in Mathematics, eds. Brylinski, J.-L., Brylinski, R., Guillemin, V. & Kac, V. (Birkhäuser Boston, Boston, MA), Vol. 123, pp. 415–456] identities involving representing a positive integer by sums of 4n² or 4n(n + 1) triangular numbers, respectively. Our 16 and 24 squares identities were originally obtained via multiple basic hypergeometric series, Gustafson’s C_ℓ nonterminating ₆φ₅ summation theorem, and Andrews’ basic hypergeometric series proof of Jacobi’s 4 and 8 squares identities. We have (elsewhere) applied symmetry and Schur function techniques to this original approach to prove the existence of similar infinite families of sums of squares identities for n² or n(n + 1) squares, respectively. Our sums of more than 8 squares identities are not the same as the formulas of Mathews (1895), Glaisher (1907), Ramanujan (1916), Mordell (1917, 1919), Hardy (1918, 1920), Kac and Wakimoto, and many others.
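For reference, the classical Jacobi (1829) base cases that these families generalize are the 4- and 8-square formulas (standard statements, quoted here for context rather than taken from the paper):

```latex
% Jacobi (1829): r_k(n) counts ordered representations of n >= 1 as a sum
% of k integer squares. These are the k = 4 and k = 8 base cases of the
% paper's 4n^2 and 4n(n+1) squares families. (Requires amsmath for \substack.)
\[
  r_4(n) \;=\; 8 \sum_{\substack{d \mid n \\ 4 \nmid d}} d,
  \qquad
  r_8(n) \;=\; 16 \sum_{d \mid n} (-1)^{\,n-d}\, d^{3}.
\]
```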
Abstract:
There is a need for faster and more sensitive algorithms for sequence similarity searching in view of the rapidly increasing amounts of genomic sequence data available. Parallel processing capabilities in the form of single instruction, multiple data (SIMD) technology are now available in common microprocessors and enable a single microprocessor to perform many operations in parallel. The ParAlign algorithm has been specifically designed to take advantage of this technology. The new algorithm initially exploits parallelism to perform a very rapid computation of the exact optimal ungapped alignment score for all diagonals in the alignment matrix. Then, a novel heuristic is employed to compute an approximate score of a gapped alignment by combining the scores of several diagonals. This approximate score is used to select the most interesting database sequences for a subsequent Smith–Waterman alignment, which is also parallelised. The resulting method represents a substantial improvement over existing heuristics. The sensitivity and specificity of ParAlign were found to be as good as those of Smith–Waterman implementations when the same method for computing the statistical significance of the matches was used. In terms of speed, only the significantly less sensitive NCBI BLAST 2 program was found to outperform the new approach. Online searches are available at http://dna.uio.no/search/
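A minimal scalar sketch of the first stage described above (no SIMD, and the match/mismatch scores are illustrative assumptions, not ParAlign's published scheme): the exact optimal ungapped alignment score on each diagonal can be obtained by running a maximum-segment (Kadane-style) scan independently along every diagonal of the alignment matrix.

```python
def best_ungapped_scores(query: str, subject: str, match=2, mismatch=-1):
    """Return {diagonal_offset: best contiguous segment score}, i.e. the exact
    optimal ungapped local alignment score on each diagonal (j - i == offset)."""
    n, m = len(query), len(subject)
    best = {}
    for d in range(-(n - 1), m):
        i, j = (0, d) if d >= 0 else (-d, 0)   # top-left cell of this diagonal
        run, top = 0, 0
        while i < n and j < m:
            s = match if query[i] == subject[j] else mismatch
            run = max(0, run + s)              # best segment ending at (i, j)
            top = max(top, run)
            i, j = i + 1, j + 1
        best[d] = top
    return best

scores = best_ungapped_scores("GATTACA", "GCATGCATTACA")
print(max(scores.items(), key=lambda kv: kv[1]))  # most promising diagonal
```

In ParAlign itself, per the abstract, several of these per-diagonal scores are then combined heuristically into an approximate gapped score before the final Smith–Waterman pass.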
Abstract:
Increasing global competition, rapidly changing markets, and greater consumer awareness have altered the way in which corporations do business. To become more efficient, many industries have sought to model some operational aspects by gigantic optimization problems. It is not atypical to encounter models that capture 10⁶ separate “yes” or “no” decisions to be made. Although one could, in principle, try all 2^(10⁶) possible solutions to find the optimal one, such a method would be impractically slow. Unfortunately, for most of these models, no algorithms are known that find optimal solutions with reasonable computation times. Typically, industry must rely on solutions of unguaranteed quality that are constructed in an ad hoc manner. Fortunately, for some of these models there are good approximation algorithms: algorithms that quickly produce solutions that are provably close to optimal. Over the past 6 years, there has been a sequence of major breakthroughs in our understanding of the design of approximation algorithms and of the limits to obtaining such performance guarantees; this has been one of the most flourishing areas of discrete mathematics and theoretical computer science.
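The abstract names no particular algorithm; as one canonical member of the genre it surveys, here is a sketch of the classical factor-2 approximation for minimum vertex cover. Its guarantee follows from a one-line argument: the endpoints of a maximal matching cover every edge, and any optimal cover must contain at least one endpoint of each matched edge.

```python
# Classical 2-approximation for unweighted minimum vertex cover: greedily
# build a maximal matching and take both endpoints of every matched edge.
# The result covers all edges and has size at most twice the optimum.

def vertex_cover_2approx(edges):
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))   # edge unmatched so far: match it, add both ends
    return cover

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]
print(vertex_cover_2approx(edges))  # a cover of size <= 2 * optimum
```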