11 results for Inverse Algorithm

at Brock University, Canada


Relevance:

30.00%

Publisher:

Abstract:

Second-rank tensor interactions, such as quadrupolar interactions between spin-1 deuterium nuclei and the electric field gradients created by chemical bonds, are affected by rapid random molecular motions that modulate the orientation of the molecule with respect to the external magnetic field. In biological and model membrane systems, where a distribution of dynamically averaged anisotropies (quadrupolar splittings, chemical shift anisotropies, etc.) is present and where, in addition, various parts of the sample may undergo partial magnetic alignment, the numerical analysis of the resulting Nuclear Magnetic Resonance (NMR) spectra is a mathematically ill-posed problem. However, numerical methods (de-Pakeing, Tikhonov regularization) exist that allow for a simultaneous determination of both the anisotropy and orientational distributions. An additional complication arises when relaxation is taken into account. This work presents a method of obtaining the orientation dependence of the relaxation rates that can be used for the analysis of molecular motions on a broad range of time scales. An arbitrary set of exponential decay rates is described by a three-term truncated Legendre polynomial expansion in the orientation dependence, as appropriate for a second-rank tensor interaction, and a linear approximation to the individual decay rates is made. Thus a severe numerical instability caused by the presence of noise in the experimental data is avoided. At the same time, enough flexibility is retained in the inversion algorithm to achieve a meaningful mapping from raw experimental data to a set of intermediate, model-free parameters.
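
The three-term expansion the abstract describes can be written as R(theta) ≈ c0·P0 + c2·P2(cos theta) + c4·P4(cos theta), keeping only even Legendre terms as appropriate for a second-rank interaction. Below is a minimal numerical sketch of fitting such an expansion by linear least squares; the function names and synthetic data are illustrative assumptions, not the thesis' code:

```python
import numpy as np

# Sketch: fit orientation-dependent decay rates R(theta) to a
# three-term even Legendre series by linear least squares. The
# linearity in the coefficients is what sidesteps the numerical
# instability mentioned in the abstract.

def legendre_design(theta):
    """Design matrix with columns P0, P2, P4 evaluated at cos(theta)."""
    x = np.cos(theta)
    p0 = np.ones_like(x)
    p2 = 0.5 * (3 * x**2 - 1)
    p4 = 0.125 * (35 * x**4 - 30 * x**2 + 3)
    return np.column_stack([p0, p2, p4])

def fit_rates(theta, rates):
    """Least-squares coefficients (c0, c2, c4) for measured rates."""
    A = legendre_design(theta)
    coeffs, *_ = np.linalg.lstsq(A, rates, rcond=None)
    return coeffs

# Example with noisy synthetic rates at several orientations
theta = np.linspace(0, np.pi / 2, 20)
true = legendre_design(theta) @ np.array([10.0, 3.0, 0.5])
rates = true + 0.1 * np.random.default_rng(0).normal(size=theta.size)
print(fit_rates(theta, rates))
```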

Relevance:

20.00%

Publisher:

Abstract:

Surface size analyses of Twenty and Sixteen Mile Creeks, the Grand and Genesee Rivers, and Cazenovia Creek show three distinct types of bed-surface sediment: 1) a "continuous" armor coat with a mean size of -6.5 phi and coarser, 2) a "discontinuous" armor coat with a mean size of approximately -6.0 phi, and 3) a bed with no armor coat with a mean surface size of -5.0 phi and finer. The continuous armor coat completely covers and protects the subsurface from the flow. The discontinuous armor coat is composed of intermittently spaced surface clasts, which provide the subsurface with only limited protection from the flow. The bed with no armor coat allows complete exposure of the subsurface to the flow. The subsurface beneath the continuous armor coats of Twenty and Sixteen Mile Creeks is possibly modified by a "vertical winnowing" process when the armor coat is penetrated. This process results in a well-developed inversely graded sediment sequence. Vertical winnowing is reduced beneath the discontinuous armor coats of the Grand and Genesee Rivers. The reduction of vertical winnowing results in more poorly developed inverse grading than that found in Twenty and Sixteen Mile Creeks. The streambed of Cazenovia Creek normally is not armored, resulting in a homogeneous subsurface which shows no modification by vertical winnowing. This streambed forms during waning or moderate flows, suggesting it does not represent the maximum competence of the stream. Each population of grains in the subsurface layers of Twenty and Sixteen Mile Creeks has been modified by vertical winnowing and does not represent a mode of transport. Each population in the subsurface layers beneath a discontinuous armor coat may partially reflect a transport mode; these layers are still inversely graded, suggesting that each population is affected to some degree by vertical winnowing. The populations for sediment beneath a surface which is not armored are probably indicative of transport modes because such sediment has not been modified by vertical winnowing. Bed photographs taken in each of the five streams before and after the 1982-83 snow-melt show that the probability of movement for the surface clasts is a function of grain size. The greatest probability of clast movement and the greatest scour depth in this study were recorded on Cazenovia Creek in areas where no armor coat is present. The scour depth in the armored beds of Twenty and Sixteen Mile Creeks is related to the probability of movement for a given mean surface size.

Relevance:

20.00%

Publisher:

Abstract:

Solid state nuclear magnetic resonance (NMR) spectroscopy is a powerful technique for studying structural and dynamical properties of disordered and partially ordered materials, such as glasses, polymers, liquid crystals, and biological materials. In particular, two-dimensional (2D) NMR methods such as ¹³C-¹³C correlation spectroscopy under magic-angle spinning (MAS) conditions have been used to measure structural constraints on the secondary structure of proteins and polypeptides. Amyloid fibrils implicated in a broad class of diseases such as Alzheimer's are known to contain a particular repeating structural motif, called a β-sheet. However, the details of such structures are poorly understood, primarily because the structural constraints extracted from the 2D NMR data in the form of the so-called Ramachandran (backbone torsion) angle distributions, g(φ,ψ), are strongly model-dependent. Inverse theory methods are used to extract Ramachandran angle distributions from a set of 2D MAS and constant-time double-quantum-filtered dipolar recoupling (CTDQFD) data. This is a vastly underdetermined problem, and the stability of the inverse mapping is problematic. Tikhonov regularization is a well-known method of improving the stability of the inverse; in this work it is extended to use a new regularization functional based on the Laplacian rather than on the norm of the function itself. In this way, one makes use of the inherently two-dimensional nature of the underlying Ramachandran maps. In addition, a modification of the existing numerical procedure is performed, as appropriate for an underdetermined inverse problem. Stability of the algorithm with respect to the signal-to-noise (S/N) ratio is examined using a simulated data set. The results show excellent convergence to the true angle distribution function g(φ,ψ) for S/N ratios above 100.
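
A minimal sketch of Tikhonov inversion with a Laplacian penalty of the general kind the abstract describes, assuming a generic linear model d = Kg + noise with g living on an n x n Ramachandran-style grid; all names are illustrative, not the thesis' implementation:

```python
import numpy as np

# Minimize ||K g - d||^2 + lam * ||L g||^2, where L is a discrete 2D
# Laplacian acting on g reshaped to an (n x n) grid. The Laplacian
# penalizes roughness rather than magnitude, exploiting the 2D nature
# of the distribution. Solved via the regularized normal equations.

def laplacian_2d(n):
    """Discrete Laplacian on an n x n grid as an (n*n, n*n) matrix."""
    I = np.eye(n)
    D = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    return np.kron(I, D) + np.kron(D, I)

def tikhonov_laplacian(K, d, n, lam):
    """Regularized least-squares solution for g (flattened grid)."""
    L = laplacian_2d(n)
    A = K.T @ K + lam * (L.T @ L)
    return np.linalg.solve(A, K.T @ d)
```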

Relevance:

20.00%

Publisher:

Abstract:

This thesis introduces the Salmon Algorithm, a search metaheuristic applicable to a variety of combinatorial optimization problems. The algorithm is loosely based on the path-finding behaviour of salmon swimming upstream to spawn. It has a number of tunable parameters, so experiments were conducted to find the optimum parameter settings for different search spaces. The algorithm was tested on one instance of the Traveling Salesman Problem and found to outperform an Ant Colony Algorithm and a Genetic Algorithm. It was then tested on three coding theory problems: optimal edit codes, optimal Hamming distance codes, and optimal covering codes. The algorithm improved on the best known values for five of the six edit code test cases, and matched the best known results on four of the seven Hamming codes and on all three covering codes. The results suggest the Salmon Algorithm is competitive with established guided random search techniques, and may be superior in some search spaces.
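
For context, an "edit code" is a set of codewords whose pairwise Levenshtein (edit) distance meets a minimum bound; a search heuristic like the one described tries to find as large a valid set as possible. A small illustrative validity check (not the thesis' implementation; names are assumptions):

```python
from itertools import combinations

def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,        # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def is_valid_code(words, d):
    """True if every pair of codewords is at least distance d apart."""
    return all(edit_distance(a, b) >= d for a, b in combinations(words, 2))

print(is_valid_code(["0000", "0111", "1011"], 2))
```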

Relevance:

20.00%

Publisher:

Abstract:

Understanding the machinery of gene regulation in order to control gene expression has been one of the main focuses of bioinformaticians for years. We use a multi-objective genetic algorithm to evolve a specialized version of side effect machines for degenerate motif discovery. We compare several suggested objectives for the motifs they find, test different multi-objective scoring schemes and probabilistic models for the background sequences, and report our results on a synthetic dataset and some biological benchmarking suites. We conclude with a comparison of our algorithm to some widely used motif discovery algorithms in the literature and suggest future directions for research in this area.
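
As a rough illustration of what "degenerate motif" matching involves, the sketch below scores IUPAC-coded motifs against a sequence; the thesis' actual objectives and side effect machine representation are more elaborate, and all names here are assumptions:

```python
# IUPAC nucleotide codes let one symbol stand for several bases
# (e.g. R = A/G, W = A/T, N = any), which is what makes a motif
# "degenerate". A GA objective could count or weight such matches.

IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T",
         "R": "AG", "Y": "CT", "S": "CG", "W": "AT",
         "K": "GT", "M": "AC", "B": "CGT", "D": "AGT",
         "H": "ACT", "V": "ACG", "N": "ACGT"}

def matches(motif, window):
    """True if the sequence window fits the degenerate motif."""
    return all(base in IUPAC[sym] for sym, base in zip(motif, window))

def count_sites(motif, sequence):
    """Number of positions in the sequence matching the motif."""
    k = len(motif)
    return sum(matches(motif, sequence[i:i + k])
               for i in range(len(sequence) - k + 1))

print(count_sites("TATAWN", "CGTATAAAGGTATATT"))
```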

Relevance:

20.00%

Publisher:

Abstract:

DNA assembly is among the most fundamental and difficult problems in bioinformatics. Near-optimal assembly solutions are available for bacterial and small genomes; however, assembling large and complex genomes, especially the human genome, using Next-Generation Sequencing (NGS) technologies has proven very difficult because of the highly repetitive and complex nature of the human genome, short read lengths, uneven data coverage, and tools that are not specifically built for human genomes. Moreover, many algorithms do not even scale to human genome datasets containing hundreds of millions of short reads. The DNA assembly problem is usually divided into several subproblems, including DNA data error detection and correction, contig creation, scaffolding, and contig orientation; each can be seen as a distinct research area. This thesis focuses specifically on creating contigs from the short reads and combining them with the outputs of other tools in order to obtain better results. Three assemblers, SOAPdenovo [Li09], Velvet [ZB08] and Meraculous [CHS+11], were selected for comparative purposes in this thesis. The obtained results show that this thesis' work produces results comparable to the other assemblers, and that combining our contigs with the outputs of the other tools produces the best results, outperforming all other investigated assemblers.
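
Velvet and SOAPdenovo are de Bruijn graph assemblers, so a toy version of the underlying contig-creation data structure may help fix ideas. This is a sketch of the general technique, not the pipeline used in the thesis:

```python
from collections import defaultdict

# Toy de Bruijn graph: reads are decomposed into k-mers, nodes are
# (k-1)-mers, and an edge links each k-mer's prefix to its suffix.
# Unbranched paths through this graph become contigs.

def de_bruijn(reads, k):
    """Map each (k-1)-mer prefix to the set of (k-1)-mer suffixes."""
    graph = defaultdict(set)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].add(kmer[1:])
    return graph

reads = ["ACGTGC", "GTGCAA", "GCAATT"]
for node, nxt in sorted(de_bruijn(reads, 4).items()):
    print(node, "->", sorted(nxt))
```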

Relevance:

20.00%

Publisher:

Abstract:

Ordered gene problems are a very common class of optimization problems, and countless algorithms have been developed in an attempt to find high-quality solutions to them. Many other types of problems are also commonly reduced to ordered gene style problems so that the popular heuristics and metaheuristics developed for them can be applied. Several ordered gene problems are studied here, namely the travelling salesman problem, the bin packing problem, and the graph colouring problem. In addition, two bioinformatics problems not traditionally seen as ordered gene problems are studied: DNA error correction and DNA fragment assembly. These problems are studied with multiple variations and combinations of heuristics and metaheuristics with two distinct types of representations. The majority of the algorithms are built around the Recentering-Restarting Genetic Algorithm. The algorithm variations were successful on all problems studied, particularly the two bioinformatics problems. For DNA error correction, multiple cases were found in which 100% of the codes were corrected. The algorithm variations were also able to beat all other state-of-the-art DNA fragment assemblers on 13 out of 16 benchmark problem instances.
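
As an illustration of the ordered gene representation, the sketch below shows order crossover (OX), a standard operator that preserves permutation validity; the Recentering-Restarting Genetic Algorithm itself adds recentering and restart machinery not shown here:

```python
import random

# Order crossover (OX): copy a random slice from parent 1, then fill
# the remaining positions with the missing genes in parent 2's order.
# The child is always a valid permutation (e.g. a valid TSP tour).

def order_crossover(p1, p2, rng=random):
    n = len(p1)
    a, b = sorted(rng.sample(range(n), 2))
    child = [None] * n
    child[a:b] = p1[a:b]
    fill = [g for g in p2 if g not in child[a:b]]
    idx = 0
    for i in range(n):
        if child[i] is None:
            child[i] = fill[idx]
            idx += 1
    return child

print(order_crossover(list(range(8)), [7, 6, 5, 4, 3, 2, 1, 0],
                      random.Random(1)))
```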

Relevance:

20.00%

Publisher:

Abstract:

Understanding the relationship between genetic diseases and the genes associated with them is an important problem for human health. The vast amount of data created by the large number of high-throughput experiments performed in the last few years has resulted in an unprecedented growth in computational methods to tackle the disease gene association problem. It is now clear that a genetic disease is not the consequence of a defect in a single gene. Instead, the disease phenotype is a reflection of various genetic components interacting in a complex network. In fact, genetic diseases, like any other phenotype, occur as a result of various genes working in sync with each other in one or several biological modules. Using a genetic algorithm, our method tries to evolve communities containing the set of potential disease genes likely to be involved in a given genetic disease. Given a set of known disease genes, we first obtain a protein-protein interaction (PPI) network containing all of them. All the other genes inside this PPI network are then considered candidate disease genes, as they lie in the vicinity of the known disease genes in the network. Our method attempts to find communities of potential disease genes that work strongly with one another and with the set of known disease genes. As a proof of concept, we tested our approach on 16 breast cancer genes and 15 Parkinson's disease genes. We obtained comparable or better results than CIPHER, ENDEAVOUR and GPEC, three of the most reliable and frequently used disease-gene ranking frameworks.
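
A hedged sketch of the kind of community fitness a genetic algorithm could evolve against, rewarding internal density and ties to the known disease genes; the thesis' actual objectives may differ, and the toy network below is purely illustrative:

```python
import networkx as nx

def community_score(ppi, community, known_genes):
    """Internal edge density plus links from community to known genes."""
    sub = ppi.subgraph(community)
    density = nx.density(sub) if len(community) > 1 else 0.0
    links = sum(1 for g in community for k in known_genes
                if ppi.has_edge(g, k))
    return density + links / max(1, len(community))

# Toy PPI network and candidate community
ppi = nx.Graph([("A", "B"), ("B", "C"), ("C", "D"), ("B", "D")])
print(community_score(ppi, {"B", "C", "D"}, {"A"}))
```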

Relevance:

20.00%

Publisher:

Abstract:

Interior illumination is a complex problem involving numerous interacting factors. This research applies genetic programming towards problems in illumination design. The Radiance system is used for performing accurate illumination simulations. Radiance accounts for a number of important environmental factors, which we exploit during fitness evaluation. Illumination requirements include local illumination intensity from natural and artificial sources, colour, and uniformity. Evolved solutions incorporate design elements such as artificial lights, room materials, windows, and glass properties. A number of case studies are examined, including many-objective problems involving up to 7 illumination requirements, the design of a decorative wall of lights, and the creation of a stained-glass window for a large public space. Our results show the technical and creative possibilities of applying genetic programming to illumination design.
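
A sketch of how a multi-requirement illumination fitness might be assembled. The Radiance simulation is abstracted behind a hypothetical simulate() callback (Radiance is normally driven through external tools), and the weights and targets are assumptions for illustration, not the thesis' actual fitness function:

```python
# Penalty-style fitness: lower is better. Combines deviation from
# target illuminance at sample points with a uniformity penalty.

def illumination_fitness(design, targets, simulate, w_intensity=1.0,
                         w_uniformity=0.5):
    readings = simulate(design)  # lux values at sample points
    intensity_err = sum(abs(r - t) for r, t in zip(readings, targets))
    uniformity_err = max(readings) - min(readings)
    return w_intensity * intensity_err + w_uniformity * uniformity_err

# Usage with a stand-in simulator returning fixed readings:
fake = lambda design: [480.0, 510.0, 495.0]
print(illumination_fitness(None, [500.0] * 3, fake))
```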

Relevance:

20.00%

Publisher:

Abstract:

In this thesis we analyze dictionary graphs and some other kinds of graphs using the PageRank algorithm. We calculated the correlation between the degree and the PageRank of all nodes for graphs obtained from the Merriam-Webster dictionary, a French dictionary, and the WordNet hypernym and synonym dictionaries. Our conclusion was that PageRank can be a good tool to compare the quality of dictionaries. We also studied some artificial social and random graphs. We found that when we omitted some random nodes from each of the graphs, we did not notice any significant changes in the ranking of the nodes according to their PageRank. We also discovered that some of the social graphs selected for our study were less resistant to changes in PageRank.
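
The degree-PageRank correlation measurement is straightforward to reproduce on any graph; here is a short sketch using networkx on a random graph, since the dictionary graphs themselves are not included in this listing:

```python
import networkx as nx
import numpy as np

# Compute PageRank and degree for every node, then report their
# Pearson correlation, mirroring the measurement described above.
G = nx.erdos_renyi_graph(500, 0.02, seed=42)
pr = nx.pagerank(G)
nodes = list(G)
ranks = [pr[n] for n in nodes]
degrees = [G.degree(n) for n in nodes]
print(np.corrcoef(ranks, degrees)[0, 1])
```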

Relevance:

20.00%

Publisher:

Abstract:

(A) Most azobenzene-based photoswitches require UV light for photoisomerization, which limits their applications in biological systems due to possible photodamage. Cyclic azobenzene derivatives, on the other hand, can undergo cis-trans isomerization when exposed to visible light. A shortened synthetic scheme was developed for the preparation of a building block containing cyclic azobenzene and D-threoninol (cAB-Thr). trans-Cyclic azobenzene was found to thermally isomerize back to the cis-form in a temperature-dependent manner. cAB-Thr was transformed into the corresponding phosphoramidite and subsequently incorporated into oligonucleotides by solid phase synthesis. Melting temperature measurements suggested that incorporation of cis-cAB into oligonucleotides destabilizes DNA duplexes; these findings corroborate the circular dichroism measurements. Finally, Fluorescence Resonance Energy Transfer experiments indicated that trans-cAB can be accommodated in DNA duplexes. (B) Inverse Electron Demand Diels-Alder (IEDDA) reactions between trans-olefins and tetrazines provide a powerful alternative to existing ligation chemistries due to their fast reaction rates, bioorthogonality, and mutual orthogonality with other click reactions. In this project, an attempt was made to synthesize trans-cyclooctene building blocks for oligonucleotide labeling by reaction with BODIPY-tetrazine. rel-(1R,4E,pR)-Cyclooct-4-enol and rel-(1R,8S,9S,4E)-bicyclo[6.1.0]non-4-ene-9-ylmethanol were synthesized and then transformed into the corresponding propargyl ethers. Subsequent Sonogashira reactions between these propargylated compounds and DMT-protected 5-iododeoxyuridine failed to give the desired products. Finally, a methodology was pursued for the synthesis of BODIPY-tetrazine conjugates to be used in future IEDDA reactions with trans-cyclooctene-modified oligonucleotides.