972 results for Computer algorithms -- TFM


Relevance: 20.00%

Publisher:

Abstract:

When building genetic maps, it is necessary to choose from several marker ordering algorithms and criteria, and the choice is not always simple. In this study, we evaluate the efficiency of the algorithms try (TRY), seriation (SER), rapid chain delineation (RCD), recombination counting and ordering (RECORD) and unidirectional growth (UG), as well as the criteria PARF (product of adjacent recombination fractions), SARF (sum of adjacent recombination fractions), SALOD (sum of adjacent LOD scores) and LHMC (likelihood through hidden Markov chains), used with the RIPPLE algorithm for error verification, in the construction of genetic linkage maps. A linkage map of a hypothetical diploid and monoecious plant species was simulated, containing one linkage group and 21 markers with a fixed distance of 3 cM between them. In all, 700 F(2) populations of different sizes (up to 400 individuals) were randomly simulated with different combinations of dominant and co-dominant markers, as well as 10 and 20% of missing data. The simulations showed that, in the presence of co-dominant markers only, any combination of algorithm and criterion may be used, even for a reduced population size. In the case of a smaller proportion of dominant markers, any of the algorithms and criteria (except SALOD) investigated may be used. In the presence of high proportions of dominant markers and smaller samples (around 100 individuals), the probability of linkage in repulsion between markers increases and, in this case, use of the algorithms TRY and SER associated with RIPPLE and the LHMC criterion would provide better results. Heredity (2009) 103, 494-502; doi:10.1038/hdy.2009.96; published online 29 July 2009
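
Several of the ordering criteria above are simple functions of the recombination fractions between adjacent markers. As a minimal illustration (not the implementation used in the study), the Python sketch below scores a candidate marker order with SARF and finds the best order by brute force; the recombination-fraction matrix, function names and marker count are illustrative assumptions only:

    # Minimal sketch: scoring a candidate marker order with SARF, the sum of
    # adjacent recombination fractions (lower is better). `rf` is assumed to be
    # a symmetric matrix of pairwise recombination fractions estimated elsewhere.
    import itertools
    import numpy as np

    def sarf(order, rf):
        """Sum of recombination fractions between adjacent markers in `order`."""
        return sum(rf[a, b] for a, b in zip(order, order[1:]))

    def best_order_exhaustive(rf):
        """Brute-force search over all orders (feasible only for a few markers)."""
        n = rf.shape[0]
        return min(itertools.permutations(range(n)), key=lambda o: sarf(o, rf))

    # Toy example: 5 markers spaced roughly evenly along one linkage group.
    rf = np.array([[0.00, 0.03, 0.06, 0.09, 0.12],
                   [0.03, 0.00, 0.03, 0.06, 0.09],
                   [0.06, 0.03, 0.00, 0.03, 0.06],
                   [0.09, 0.06, 0.03, 0.00, 0.03],
                   [0.12, 0.09, 0.06, 0.03, 0.00]])
    print(best_order_exhaustive(rf))  # (0, 1, 2, 3, 4)

Ordering algorithms such as TRY, SER, RCD, RECORD and UG exist precisely because this brute-force search becomes infeasible as the number of markers grows (21 markers already admit 21! ≈ 5 x 10^19 orderings).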

Relevance: 20.00%

Publisher:

Abstract:

In the protein folding problem, solvent-mediated forces are commonly represented by intra-chain pairwise contact energy. Although this approximation has proven to be useful in several circumstances, it is limited in some other aspects of the problem. Here we show that it is possible to construct two models to represent the chain-solvent system, one with implicit and the other with explicit solvent, such that both reproduce the same thermodynamic results. First, lattice models treated by analytical methods were used to show that the implicit and explicit representations of solvent effects can be energetically equivalent only if local solvent properties are invariant in time and space. Next, applying the same reasoning used for the lattice models, two mutually consistent Monte Carlo off-lattice models for implicit and explicit solvent are constructed, with the solvent properties in the latter now allowed to fluctuate. It is then shown that the chain configurational evolution as well as the globule equilibrium conformation are significantly distinct for implicit and explicit solvent systems. Indeed, in strong contrast with the implicit solvent version, the explicit solvent model predicts: (i) a malleable globule, in agreement with the estimated large protein-volume fluctuations; (ii) thermal conformational stability, resembling the conformational heat resistance of globular proteins, in which radii of gyration are practically insensitive to thermal effects over a relatively wide range of temperatures; and (iii) smaller radii of gyration at higher temperatures, indicating that the chain conformational entropy in the unfolded state is significantly smaller than that estimated from random coil configurations. Finally, we comment on the meaning of these results with respect to the understanding of the folding process. (C) 2009 Elsevier B.V. All rights reserved.

Relevance: 20.00%

Publisher:

Abstract:

The aim of this work is to present a simple, practical and efficient protocol for drug design, here applied to diabetes, which includes selection of the disease, careful choice of a target and a bioactive ligand, and then the use of various computer-aided drug design and medicinal chemistry tools to design novel potential drug candidates for different diseases. We have selected the validated target dipeptidyl peptidase IV (DPP-IV), whose inhibition contributes to reducing glucose levels in type 2 diabetes patients. The most active inhibitor with a reported X-ray structure of its complex was initially extracted from the BindingDB database. By using molecular modification strategies widely employed in medicinal chemistry, together with current state-of-the-art tools in drug design (including flexible docking, virtual screening, molecular interaction fields, molecular dynamics, and ADME and toxicity predictions), we have proposed four novel potential DPP-IV inhibitors with drug properties for diabetes control, which are supported and validated by all the computational tools used here.

Relevance: 20.00%

Publisher:

Abstract:

Phospholipases A2 (PLA2) are enzymes commonly found in snake venoms of the Viperidae and Elapidae families, of which they are major components. Many plants are used in traditional medicine as active agents against the various effects induced by snakebites. This article presents a structure prediction for the PLA2 BthTX-I based on homology modeling. In addition, we have performed virtual screening of a large database, yielding a set of potential bioactive inhibitors. A flexible docking program was used to investigate the interactions between the receptor and the new ligands. We have also performed molecular interaction field (MIF) calculations with the phospholipase model. The results confirm the important role of Lys49 in binding ligands and suggest three additional residues as well. We have proposed a theoretically nontoxic, drug-like, and potential novel BthTX-I inhibitor. These calculations have been used to guide the design of novel phospholipase inhibitors as potential lead compounds that may be optimized for future treatment of snakebite victims as well as other human diseases in which PLA2 enzymes are involved.

Relevance: 20.00%

Publisher:

Abstract:

We have used various computational methodologies, including molecular dynamics, density functional theory, virtual screening, ADMET predictions and molecular interaction field studies, to design and analyze four novel potential inhibitors of farnesyltransferase (FTase). Evaluation of two of the proposals regarding their potential as drugs and lead compounds has indicated them as promising novel FTase inhibitors, with theoretically interesting pharmacotherapeutic profiles when compared to the very active and most cited FTase inhibitors with reported activity data, which are launched drugs or compounds in clinical tests. One of these two proposals appears to be a more promising drug candidate and FTase inhibitor, but both derivative molecules indicate potentially very good pharmacotherapeutic profiles in comparison with Tipifarnib and Lonafarnib, two reference pharmaceuticals. Two other proposals were selected with virtual screening approaches and investigated by LIS; they suggest novel, alternative scaffolds for designing future potential FTase inhibitors. Such compounds can be explored as promising molecules to initiate a research protocol aimed at discovering novel anticancer drug candidates targeting farnesyltransferase, in the fight against cancer. (C) 2009 Elsevier Inc. All rights reserved.

Relevance: 20.00%

Publisher:

Abstract:

Despite many successes of conventional DNA sequencing methods, some DNAs remain difficult or impossible to sequence. Unsequenceable regions occur in the genomes of many biologically important organisms, including the human genome. Such regions range in length from tens to millions of bases, and may contain valuable information such as the sequences of important genes. The authors have recently developed a technique that renders a wide range of problematic DNAs amenable to sequencing. The technique is known as sequence analysis via mutagenesis (SAM). This paper presents a number of algorithms for analysing and interpreting data generated by this technique.

Relevance: 20.00%

Publisher:

Abstract:

The XSophe-Sophe-XeprView(R) computer simulation software suite enables scientists to easily determine spin Hamiltonian parameters from isotropic, randomly oriented and single crystal continuous wave electron paramagnetic resonance (CW EPR) spectra from radicals and isolated paramagnetic metal ion centers or clusters found in metalloproteins, chemical systems and materials science. XSophe provides an X-windows graphical user interface to the Sophe programme and allows creation of multiple input files, local and remote execution of Sophe, and the display of sophelog (output from Sophe) and input parameters/files. Sophe is a sophisticated computer simulation software programme employing a number of innovative technologies, including: the Sydney OPera HousE (SOPHE) partition and interpolation schemes, a field segmentation algorithm, the mosaic misorientation linewidth model, parallelization and spectral optimisation. In conjunction with the SOPHE partition scheme and the field segmentation algorithm, the SOPHE interpolation scheme and the mosaic misorientation linewidth model greatly increase the speed of simulations for most spin systems. Employing brute force matrix diagonalization in the simulation of an EPR spectrum from a high spin Cr(III) complex with the spin Hamiltonian parameters g_e = 2.00, D = 0.10 cm^-1, E/D = 0.25, A_x = 120.0, A_y = 120.0, A_z = 240.0 x 10^-4 cm^-1 requires a SOPHE grid size of N = 400 (to produce a good signal-to-noise ratio) and takes 229.47 s. In contrast, use of either the SOPHE interpolation scheme or the mosaic misorientation linewidth model requires a SOPHE grid size of only N = 18 and takes 44.08 s and 0.79 s, respectively. Results from Sophe are transferred via the Common Object Request Broker Architecture (CORBA) to XSophe and subsequently to XeprView(R), where the simulated CW EPR spectra (1D and 2D) can be compared to the experimental spectra. Energy level diagrams, transition roadmaps and transition surfaces aid the interpretation of complicated randomly oriented CW EPR spectra and can be viewed with a web browser and an OpenInventor scene graph viewer.
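
For readers unfamiliar with the notation, the parameters quoted above (g, D, E, A) enter a spin Hamiltonian of the general form sketched below; this is the standard electron Zeeman / zero-field splitting / hyperfine form assumed here for illustration, not a formula quoted from the Sophe documentation:

    % Assumed standard CW EPR spin Hamiltonian (illustrative, not from the Sophe manual)
    \mathcal{H} = \mu_B\,\mathbf{B}\cdot\mathbf{g}\cdot\mathbf{S}
                + D\left[S_z^{2} - \frac{1}{3}S(S+1)\right]
                + E\left(S_x^{2} - S_y^{2}\right)
                + \mathbf{S}\cdot\mathbf{A}\cdot\mathbf{I}

For the high spin Cr(III) example, S = 3/2, so the quoted D = 0.10 cm^-1 and E/D = 0.25 correspond to E = 0.025 cm^-1.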

Relevance: 20.00%

Publisher:

Abstract:

The reconstruction of a complex scene from multiple images is a fundamental problem in the field of computer vision. Volumetric methods have proven to be a strong alternative to traditional correspondence-based methods due to their flexible visibility models. In this paper we analyse existing methods for volumetric reconstruction and identify three key properties of voxel colouring algorithms: a water-tight surface model, a monotonic carving order, and causality. We present a new Voxel Colouring algorithm which embeds all reconstructions of a scene into a single output. While modelling exact visibility for arbitrary camera locations, Embedded Voxel Colouring removes the need for the a priori threshold selection present in previous work. An efficient implementation is given, along with results demonstrating the advantages of a posteriori threshold selection.

Relevance: 20.00%

Publisher:

Abstract:

The BR algorithm is a novel and efficient method to find all eigenvalues of upper Hessenberg matrices and has never before been applied to eigenanalysis for power system small signal stability. This paper analyzes differences between the BR and the QR algorithms, with a performance comparison in terms of CPU time based on stopping criteria and storage requirement. The BR algorithm utilizes accelerating strategies to improve its performance when computing eigenvalues of narrowly banded, nearly tridiagonal upper Hessenberg matrices. These strategies significantly reduce the computation time at a reasonable level of precision. Compared with the QR algorithm, the BR algorithm requires fewer iteration steps and less storage space without sacrificing appropriate precision in solving eigenvalue problems of large-scale power systems. Numerical examples demonstrate the efficiency of the BR algorithm in eigenanalysis tasks for 39-, 68-, 115-, 300-, and 600-bus systems. Experimental results suggest that the BR algorithm is a more efficient algorithm for large-scale power system small signal stability eigenanalysis.
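
The BR algorithm itself is not available in standard numerical libraries, but the problem setting above — all eigenvalues of an upper Hessenberg state matrix, with small signal stability read from their real parts — can be sketched with the QR-based solvers in NumPy/SciPy. The random matrix below is only a stand-in for a power-system state matrix:

    # Baseline sketch using a standard QR-based solver; the BR algorithm compared
    # in the abstract is not part of NumPy/SciPy, so this only illustrates the
    # problem setting, not the BR method itself.
    import numpy as np
    from scipy.linalg import hessenberg

    rng = np.random.default_rng(0)
    A = rng.standard_normal((200, 200))   # toy state matrix, not a real power system
    H = hessenberg(A)                     # reduce to upper Hessenberg form
    eigs = np.linalg.eigvals(H)           # all eigenvalues via the QR algorithm

    # Small signal stability would be assessed from the real parts of the eigenvalues.
    print("largest real part:", eigs.real.max())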

Relevance: 20.00%

Publisher:

Abstract:

Evolution strategies are a class of general optimisation algorithms which are applicable to functions that are multimodal, nondifferentiable, or even discontinuous. Although recombination operators have been introduced into evolution strategies, the primary search operator is still mutation. Classical evolution strategies rely on Gaussian mutations. A new mutation operator based on the Cauchy distribution is proposed in this paper. It is shown empirically that the new evolution strategy based on Cauchy mutation outperforms the classical evolution strategy on most of the 23 benchmark problems tested in this paper. The paper also shows empirically that changing the order of mutating the objective variables and mutating the strategy parameters does not alter the previous conclusion significantly, and that Cauchy mutations with different scaling parameters still outperform the Gaussian mutation with self-adaptation. However, the advantage of Cauchy mutations disappears when recombination is used in evolution strategies. It is argued that the search step size plays an important role in determining evolution strategies' performance. The large step size of recombination plays a similar role as Cauchy mutation.
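
The contrast between Gaussian and Cauchy mutations can be seen in a toy sketch; the (1+1)-style loop and sphere objective below are illustrative simplifications, not the evolution strategies or the 23 benchmark functions used in the paper:

    # Sketch of the mutation step only: Cauchy mutations have heavy tails, so
    # large jumps of the object variables are far more likely than with Gaussian
    # mutations of the same scale.
    import numpy as np

    rng = np.random.default_rng(42)

    def mutate_gaussian(x, sigma):
        return x + sigma * rng.standard_normal(x.shape)

    def mutate_cauchy(x, sigma):
        return x + sigma * rng.standard_cauchy(x.shape)

    def sphere(x):
        return float(np.sum(x * x))

    def simple_es(mutate, dim=10, generations=200, sigma=0.3):
        """Toy (1+1)-style loop; accepts a candidate if it is no worse."""
        x = rng.uniform(-5.0, 5.0, dim)
        fx = sphere(x)
        for _ in range(generations):
            y = mutate(x, sigma)
            fy = sphere(y)
            if fy <= fx:
                x, fx = y, fy
        return fx

    print("Gaussian:", simple_es(mutate_gaussian))
    print("Cauchy:  ", simple_es(mutate_cauchy))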

Relevance: 20.00%

Publisher:

Abstract:

Algorithms for explicit integration of structural dynamics problems with multiple time steps (subcycling) are investigated. Only one such algorithm, due to Smolinski and Sleith, has proved to be stable in a classical sense. A simplified version of this algorithm that retains its stability is presented. However, as with the original version, it can be shown to sacrifice accuracy to achieve stability. Another algorithm in use is shown to be only statistically stable, in that a probability of stability can be assigned if appropriate time step limits are observed. This probability improves rapidly with the number of degrees of freedom in a finite element model. The stability problems are shown to be a property of the central difference method itself, which is modified to give the subcycling algorithm. A related problem is shown to arise when a constraint equation in time is introduced into a time-continuous space-time finite element model. (C) 1998 Elsevier Science S.A.
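
Since the subcycling schemes above are modifications of the explicit central difference method, a minimal single-degree-of-freedom sketch of that base method (without subcycling) is given below; the mass, stiffness and start-up step are illustrative assumptions:

    # Explicit central difference integration of m*x'' + k*x = 0 for one degree
    # of freedom. The classical stability limit for this scheme is dt <= 2/omega.
    import numpy as np

    m, k = 1.0, 100.0                 # toy mass and stiffness
    omega = np.sqrt(k / m)
    dt = 0.5 * (2.0 / omega)          # half the critical time step

    steps = 200
    x = np.zeros(steps)
    x[0] = 1.0                                   # initial displacement, zero velocity
    x[1] = x[0] - 0.5 * dt**2 * (k / m) * x[0]   # start-up step
    for n in range(1, steps - 1):
        # x_{n+1} = 2 x_n - x_{n-1} - dt^2 (k/m) x_n
        x[n + 1] = 2.0 * x[n] - x[n - 1] - dt**2 * (k / m) * x[n]

    print("displacement after %d steps: %.4f" % (steps - 1, x[-1]))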

Relevance: 20.00%

Publisher:

Abstract:

The cost of spatial join processing can be very high because of the large sizes of spatial objects and the computation-intensive spatial operations. While parallel processing seems a natural solution to this problem, it is not clear how spatial data can be partitioned for this purpose. Various spatial data partitioning methods are examined in this paper. A framework combining the data-partitioning techniques used by most parallel join algorithms in relational databases and the filter-and-refine strategy for spatial operation processing is proposed for parallel spatial join processing. Object duplication caused by multi-assignment in spatial data partitioning can result in extra CPU cost as well as extra communication cost. We find that the key to overcome this problem is to preserve spatial locality in task decomposition. We show in this paper that a near-optimal speedup can be achieved for parallel spatial join processing using our new algorithms.
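
The filter-and-refine strategy mentioned above can be illustrated with a small sequential sketch: a cheap bounding-box overlap test filters candidate pairs, and an exact geometric predicate refines the survivors. Data partitioning, duplicate elimination and parallel execution are omitted, and the circle objects are purely illustrative:

    # Filter-and-refine sketch for a spatial join over toy circle objects.
    from dataclasses import dataclass

    @dataclass
    class Circle:
        x: float
        y: float
        r: float

        def bbox(self):
            return (self.x - self.r, self.y - self.r, self.x + self.r, self.y + self.r)

    def bbox_overlap(a, b):
        ax0, ay0, ax1, ay1 = a
        bx0, by0, bx1, by1 = b
        return ax0 <= bx1 and bx0 <= ax1 and ay0 <= by1 and by0 <= ay1

    def circles_intersect(a, b):
        return (a.x - b.x) ** 2 + (a.y - b.y) ** 2 <= (a.r + b.r) ** 2

    def spatial_join(left, right):
        # Filter: cheap bounding-box test (a real system would partition the data
        # and use spatial indexes instead of nested loops).
        candidates = [(a, b) for a in left for b in right
                      if bbox_overlap(a.bbox(), b.bbox())]
        # Refine: exact geometric predicate on the surviving pairs only.
        return [(a, b) for a, b in candidates if circles_intersect(a, b)]

    left = [Circle(0, 0, 1), Circle(5, 5, 1)]
    right = [Circle(1, 1, 1), Circle(9, 9, 1)]
    print(len(spatial_join(left, right)))  # 1 joined pair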

Relevance: 20.00%

Publisher:

Abstract:

Extended gcd calculation has a long history and plays an important role in computational number theory and linear algebra. Recent results have shown that finding optimal multipliers in extended gcd calculations is difficult. We present an algorithm which uses lattice basis reduction to produce small integer multipliers x_1, ..., x_m for the equation s = gcd(s_1, ..., s_m) = x_1 s_1 + ... + x_m s_m, where s_1, ..., s_m are given integers. The method generalises to produce small unimodular transformation matrices for computing the Hermite normal form of an integer matrix.
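
To make the role of the multipliers concrete, the naive iterative extended gcd sketch below produces some valid multipliers for a set of integers; unlike the lattice basis reduction method described above, it makes no attempt to keep them small:

    # Naive multi-argument extended gcd: expresses gcd(s_1, ..., s_m) as an
    # integer combination of the inputs. The multipliers are valid but may be
    # large, which is exactly the problem the lattice-based method addresses.
    def ext_gcd(a, b):
        """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
        if b == 0:
            return (a, 1, 0)
        g, x, y = ext_gcd(b, a % b)
        return (g, y, x - (a // b) * y)

    def multi_ext_gcd(values):
        """Return (g, coeffs) with sum(c * v for c, v in zip(coeffs, values)) == g."""
        g, coeffs = values[0], [1]
        for v in values[1:]:
            g, x, y = ext_gcd(g, v)
            coeffs = [c * x for c in coeffs] + [y]
        return g, coeffs

    s = [462, 1071, 231, 735]
    g, xs = multi_ext_gcd(s)
    print(g, xs, sum(c * v for c, v in zip(xs, s)))  # 21 [7, -3, 0, 0] 21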

Relevance: 20.00%

Publisher:

Abstract:

Objective: The objectives were to determine the postural consequences of varying computer monitor height and to describe self-selected monitor heights and postures. Design: The design involved experimental manipulation of computer monitor height, description of self-selected heights, and measurement of posture and gaze angles. Background: Disagreement exists with regard to the appropriate height of computer monitors. It is known that users alter both head orientation and gaze angle in response to changes in monitor height; however, the relative contribution of atlanto-occipital and cervical flexion to the change in head rotation is unknown. No information is available with regard to self-selected monitor heights. Methods: Twelve students performed a tracking task with the monitor placed at three different heights. The subjects then completed eight trials in which monitor height was first self-selected. Sagittal postural and gaze angle data were determined by digitizing markers defining a two-dimensional three-link model of the trunk, cervical spine and head. Results: The 27 degrees change in monitor height imposed was, on average, accommodated by 18 degrees of head inclination and a 9 degrees change in gaze angle relative to the head. The change in head inclination was achieved by a 6 degrees change in trunk inclination, a 4 degrees change in cervical flexion, and a 7 degrees change in atlanto-occipital flexion. The self-selected height varied depending on the initial monitor height and inclination. Conclusions: Self-selected monitor heights were lower than current 'eye-level' recommendations. Lower monitor heights are likely to reduce both visual and musculoskeletal discomfort. Relevance: Musculoskeletal and visual discomfort may be reduced by placing computer monitors lower than currently recommended. (C) 1998 Elsevier Science Ltd. All rights reserved.

Relevance: 20.00%

Publisher:

Abstract:

We tested the effects of four data characteristics on the results of reserve selection algorithms. The data characteristics were nestedness of features (land types in this case), rarity of features, size variation of sites (potential reserves) and size of data sets (numbers of sites and features). We manipulated data sets to produce three levels, with replication, of each of these data characteristics while holding the other three characteristics constant. We then used an optimizing algorithm and three heuristic algorithms to select sites to solve several reservation problems. We measured efficiency as the number or total area of selected sites, indicating the relative cost of a reserve system. Higher nestedness increased the efficiency of all algorithms (reduced the total cost of new reserves). Higher rarity reduced the efficiency of all algorithms (increased the total cost of new reserves). More variation in site size increased the efficiency of all algorithms expressed in terms of total area of selected sites. We measured the suboptimality of heuristic algorithms as the percentage increase of their results over optimal (minimum possible) results. Suboptimality is a measure of the reliability of heuristics as indicative costing analyses. Higher rarity reduced the suboptimality of heuristics (increased their reliability) and there is some evidence that more size variation did the same for the total area of selected sites. We discuss the implications of these results for the use of reserve selection algorithms as indicative and real-world planning tools.
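
As a concrete example of the kind of heuristic compared above, the greedy site-selection sketch below repeatedly picks the site adding the most still-unrepresented features; the site and land-type data are invented for illustration, and this is not necessarily one of the specific heuristics tested in the study:

    # Greedy reserve-selection heuristic over toy data: each site maps to the set
    # of features (land types) it contains; minimise the number of selected sites.
    def greedy_reserve_selection(sites):
        needed = set().union(*sites.values())
        chosen = []
        while needed:
            best = max(sites, key=lambda s: len(sites[s] & needed))
            if not sites[best] & needed:
                break  # remaining features cannot be covered by any site
            chosen.append(best)
            needed -= sites[best]
        return chosen

    toy_sites = {
        "A": {"heath", "forest"},
        "B": {"forest", "wetland"},
        "C": {"wetland"},
        "D": {"heath", "grassland"},
    }
    print(greedy_reserve_selection(toy_sites))  # ['A', 'B', 'D'] with these data

An optimizing algorithm would guarantee the minimum number (or area) of sites; the suboptimality measured in the study is how far such heuristic results fall above that minimum.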