9 results for hybrid algorithm
at Brock University, Canada
Abstract:
This exploratory, descriptive action research study is based on a survey of a convenience sample consisting of 172 college and university marketing students and 5 professors experienced in teaching in an internet-based environment. The students surveyed were studying e-commerce and international business in 3rd- and 4th-year classes at a leading university in Ontario, and e-commerce in 5th-semester classes at a leading college. These classes were taught using a hybrid teaching style, supported by a large website containing pertinent text and audio material. Hybrid teaching employs web-based course materials (some in the form of Learning Objects) to deliver curriculum material both during the attended lectures and for students accessing the course web page outside of class hours. The survey took the form of an online questionnaire. The research questions explored in this study were: 1. What factors influence the students' ability to access and learn from web-based course content? 2. How likely are the students to use selected elements of internet-based curriculum for learning academic content? 3. What is the preferred physical environment to facilitate learning in a hybrid environment? 4. How effective are selected teaching/learning strategies in a hybrid environment? The findings of this study suggest that students are very interested in being part of the learning process by contributing to a course website. Specifically, students are interested in audio content being one of the formats of online course material, and have an interest in helping create small audio clips to be used in class.
Abstract:
The prediction of a protein's conformation helps to understand its exhibited functions, allows for modeling and allows for the possible synthesis of the studied protein. Our research is focused on a sub-problem of protein folding known as side-chain packing, whose computational complexity has been proven to be NP-hard. The motivation behind our study is to offer the scientific community a means to obtain faster conformation approximations for small to large proteins than currently available methods. As the size of proteins increases, current techniques become unusable due to the exponential nature of the problem. We investigated the capabilities of a hybrid genetic algorithm / simulated annealing technique to predict the low-energy conformational states of proteins of various sizes and to generate statistical distributions of the studied proteins' molecular ensembles for pKa predictions. Our algorithm produced errors within acceptable margins of experimental results and offered considerable speed-up, depending on the protein and on the resolution of the rotameric states used.
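The abstract does not spell out how the genetic algorithm and simulated annealing are combined; one common way to hybridize them (a sketch only, not the thesis's implementation) is to use a simulated-annealing acceptance test when deciding whether a mutated offspring replaces a parent. Everything below, including the toy "energy" over discrete rotamer indices, is hypothetical.

```python
import math
import random

random.seed(42)

# Toy stand-in for a side-chain energy function: each "residue" picks one
# rotamer index, and the energy counts mismatches against a hidden target
# (purely illustrative, not a physical force field).
TARGET = [2, 0, 3, 1, 2, 0, 1, 3]
N_ROTAMERS = 4

def energy(conf):
    return sum(1 for a, b in zip(conf, TARGET) if a != b)

def mutate(conf):
    c = conf[:]
    i = random.randrange(len(c))
    c[i] = random.randrange(N_ROTAMERS)
    return c

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def hybrid_ga_sa(pop_size=20, generations=200, t0=2.0, cooling=0.98):
    pop = [[random.randrange(N_ROTAMERS) for _ in TARGET] for _ in range(pop_size)]
    temp = t0
    for _ in range(generations):
        pop.sort(key=energy)
        elite = pop[: pop_size // 2]          # GA part: selection + elitism
        children = []
        while len(children) < pop_size - len(elite):
            child = mutate(crossover(random.choice(elite), random.choice(elite)))
            parent = random.choice(elite)
            # SA part: always keep improvements, occasionally accept a worse
            # child while the temperature is still high.
            delta = energy(child) - energy(parent)
            if delta <= 0 or random.random() < math.exp(-delta / temp):
                children.append(child)
            else:
                children.append(parent[:])
        pop = elite + children
        temp *= cooling                        # annealing schedule
    return min(pop, key=energy)

best = hybrid_ga_sa()
print(energy(best))
```

The annealing temperature only gates replacement decisions; the population machinery (selection, crossover, elitism) is otherwise an ordinary genetic algorithm.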
Abstract:
The synthesis of 3-ethynylthienyl- (2.07) and 3-ethynylterthienyl- (2.19) substituted qsal [qsalH = N-(8-quinolyl)salicylaldimine] and 3,3'-diethynyl-2,2'-bithienyl bridging bisqsal (5.06) ligands is described, along with the preparation and characterization of eight cationic iron(III) complexes containing these ligands with a selection of counteranions [(2.07) with: SCN- (2.08), PF6- (2.09), and ClO4- (2.10); (2.19) with: PF6- (2.20); (5.06) with: Cl- (5.07), SCN- (5.08), PF6- (5.09), and ClO4- (5.10)]. Spin-crossover is observed in the solid state for (2.08)-(2.10) and (5.07)-(5.10), including a very rare S = 5/2 to 3/2 spin-crossover in complex (2.09). The unusual reduction of complex (2.10) produces a high-spin iron(II) complex (2.12). Six iron(II) complexes derived from thienyl analogues of bispicen [bispicen = bis(2-pyridylmethyl)diamine] [2,5-thienyl substituents = H- (3.11), phenyl- (3.12), 2-thienyl (3.13)] or N-phenyl-2-pyridinalimine ligands [2,5-phenyl substituents = diphenyl (3.23), di(2-thienyl) (3.24); 4-phenyl substituent = 3-thienyl (3.25)] are reported. Complexes (3.11), (3.23) and (3.25) display thermal spin-crossover in the solid state, and (3.12) remains high-spin at all temperatures. Complex (3.13) rearranges to form an iron(II) complex (3.14) with temperature-dependent magnetic properties best described as a one-dimensional ferromagnetic chain, with interchain antiferromagnetic interactions and/or ZFS dominant at low temperatures. Magnetic susceptibility and Mössbauer data for complex (3.24) display a temperature-dependent mixture of spin isomers. The preparation and characterization of two cobalt(II) complexes containing 3-ethynylthienyl- (4.04) and 3-ethynylterthienyl- (4.06) substituted bipyridine ligands [(4.05): [Co(dbsq)2(4.04)]; (4.07): [Co(dbsq)2(4.06)]] [dbsq = 3,5-di-tert-butyl-1,2-semiquinonate] are reported. Complexes (4.05) and (4.07) exhibit thermal valence tautomerism in the solid state and in solution.
Self-assembly of complex (2.10) into polymeric spheres (6.11) afforded the first spin-crossover, polydisperse, micro- to nanoscale material of its kind. Complexes (2.20), (3.24) and (4.07) also form polymers through electrochemical synthesis to produce hybrid metallopolymer films (6.12), (6.15) and (6.16), respectively. The films have been characterized by EDX, FT-IR and UV-Vis spectroscopy. Variable-temperature magnetic susceptibility measurements demonstrate that spin lability is operative in the polymers, and conductivity measurements confirm their electron-transport properties. Polymer (6.15) has a persistent oxidized state that shows a significant decrease in electrical resistance.
Abstract:
This thesis introduces the Salmon Algorithm, a search metaheuristic that can be used for a variety of combinatorial optimization problems. The algorithm is loosely based on the path-finding behaviour of salmon swimming upstream to spawn. It has a number of tunable parameters, so experiments were conducted to find the optimal parameter settings for different search spaces. The algorithm was tested on one instance of the Traveling Salesman Problem and found to outperform an Ant Colony Algorithm and a Genetic Algorithm. It was then tested on three coding theory problems: optimal edit codes, optimal Hamming distance codes, and optimal covering codes. The algorithm improved on the best known values for five of the six edit-code test cases. It matched the best known results on four of the seven Hamming codes and on all three covering codes. The results suggest the Salmon Algorithm is competitive with established guided random search techniques, and may be superior in some search spaces.
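The Salmon Algorithm's internals are defined in the thesis itself, but the Hamming-code objective it optimizes is standard: a candidate code is scored by the minimum pairwise Hamming distance over all of its codewords. A minimal sketch (the example codewords below are hand-picked for illustration, not taken from the thesis):

```python
from itertools import combinations

def hamming(a, b):
    # Number of positions at which two equal-length words differ.
    return sum(x != y for x, y in zip(a, b))

def min_distance(code):
    # A code's error-correcting power is governed by its minimum
    # pairwise distance; optimal-code search tries to maximize this
    # for a fixed word length and number of codewords.
    return min(hamming(a, b) for a, b in combinations(code, 2))

code = ["0000000", "1110100", "0111010", "0011101"]
print(min_distance(code))  # -> 4
```

A metaheuristic searching for optimal codes would repeatedly perturb the codeword set and use `min_distance` (or a smoother variant of it) as the fitness to maximize.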
Abstract:
Understanding the machinery of gene regulation that controls gene expression has been one of the main focuses of bioinformaticians for years. We use a multi-objective genetic algorithm to evolve a specialized version of side effect machines for degenerate motif discovery. We compare several suggested objectives by the motifs they find, test different multi-objective scoring schemes and probabilistic models for the background sequences, and report our results on a synthetic dataset and on several biological benchmarking suites. We conclude with a comparison of our algorithm to some widely used motif discovery algorithms in the literature, and suggest future directions for research in this area.
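Degenerate motifs are conventionally written with IUPAC ambiguity codes (e.g. W = A or T, N = any base), and counting a motif's occurrences in a sequence is a basic building block of most motif-scoring objectives. A minimal, generic sketch (not the side-effect-machine representation used in the thesis):

```python
# IUPAC nucleotide ambiguity codes: each symbol names the set of
# concrete bases it matches.
IUPAC = {
    "A": "A", "C": "C", "G": "G", "T": "T",
    "R": "AG", "Y": "CT", "S": "CG", "W": "AT",
    "K": "GT", "M": "AC", "N": "ACGT",
}

def matches(motif, window):
    # True if every base in the window is allowed by the
    # corresponding degenerate symbol of the motif.
    return all(base in IUPAC[sym] for sym, base in zip(motif, window))

def count_occurrences(motif, seq):
    # Slide the motif over the sequence and count matching windows;
    # an objective might combine such counts over foreground and
    # background sequence sets.
    k = len(motif)
    return sum(matches(motif, seq[i:i + k]) for i in range(len(seq) - k + 1))

seq = "ACGTGACGTTACGA"
print(count_occurrences("ACGW", seq))  # -> 3
```

A multi-objective setup would score one such count against real (foreground) sequences and another against a background model, trading specificity off against coverage.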
Abstract:
DNA assembly is among the most fundamental and difficult problems in bioinformatics. Near-optimal assembly solutions are available for bacterial and small genomes; however, assembling large and complex genomes, especially the human genome, using Next-Generation Sequencing (NGS) technologies has proven very difficult because of the highly repetitive and complex nature of the human genome, short read lengths, uneven data coverage, and tools that are not specifically built for human genomes. Moreover, many algorithms do not even scale to human genome datasets containing hundreds of millions of short reads. The DNA assembly problem is usually divided into several subproblems, including DNA data error detection and correction, contig creation, scaffolding, and contig orientation, each of which can be seen as a distinct research area. This thesis specifically focuses on creating contigs from the short reads and combining them with the outputs of other tools in order to obtain better results. Three assemblers, SOAPdenovo [Li09], Velvet [ZB08] and Meraculous [CHS+11], are selected for comparative purposes in this thesis. The results show that this thesis's approach produces contigs comparable to those of the other assemblers, and that combining our contigs with the outputs of the other tools produces the best results, outperforming all other investigated assemblers.
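As a rough illustration of the contig-creation subproblem (a textbook greedy overlap merge, not the pipeline used in the thesis, whose reads here are invented), reads can be repeatedly merged at their longest suffix-prefix overlap:

```python
def overlap(a, b, min_len=3):
    # Length of the longest suffix of `a` that equals a prefix of `b`,
    # provided it is at least `min_len`; 0 otherwise.
    for length in range(min(len(a), len(b)), min_len - 1, -1):
        if a.endswith(b[:length]):
            return length
    return 0

def greedy_assemble(reads, min_len=3):
    # Repeatedly merge the pair of reads with the largest overlap until
    # no pair overlaps by at least `min_len` bases.
    reads = list(reads)
    while len(reads) > 1:
        best = (0, None, None)
        for i, a in enumerate(reads):
            for j, b in enumerate(reads):
                if i != j:
                    olen = overlap(a, b, min_len)
                    if olen > best[0]:
                        best = (olen, i, j)
        olen, i, j = best
        if olen == 0:
            break  # remaining reads become separate contigs
        merged = reads[i] + reads[j][olen:]
        reads = [r for k, r in enumerate(reads) if k not in (i, j)] + [merged]
    return reads

reads = ["ATTAGACCTG", "CCTGCCGGAA", "AGACCTGCCG", "GCCGGAATAC"]
print(greedy_assemble(reads))  # -> ['ATTAGACCTGCCGGAATAC']
```

Real assemblers avoid this quadratic all-pairs scan (e.g. via de Bruijn graphs, as in Velvet and SOAPdenovo), but the suffix-prefix merging idea is the same.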
Abstract:
Ordered gene problems are a very common class of optimization problems. Because of their popularity, countless algorithms have been developed in an attempt to find high-quality solutions to them, and it is common to see many other types of problems reduced to ordered gene form so that these well-studied heuristics and metaheuristics can be applied. Multiple ordered gene problems are studied, namely the travelling salesman problem, the bin packing problem, and the graph colouring problem. In addition, two bioinformatics problems not traditionally seen as ordered gene problems are studied: DNA error correction and DNA fragment assembly. These problems are studied with multiple variations and combinations of heuristics and metaheuristics, using two distinct types of representations. The majority of the algorithms are built around the Recentering-Restarting Genetic Algorithm. The algorithm variations were successful on all problems studied, particularly the two bioinformatics problems. For DNA error correction, multiple cases were found in which 100% of the codes were corrected. The algorithm variations were also able to beat all other state-of-the-art DNA fragment assemblers on 13 of 16 benchmark problem instances.
Abstract:
Understanding the relationship between genetic diseases and the genes associated with them is an important problem regarding human health. The vast amount of data created by the large number of high-throughput experiments performed in the last few years has resulted in an unprecedented growth in computational methods to tackle the disease gene association problem. Nowadays, it is clear that a genetic disease is not the consequence of a defect in a single gene. Instead, the disease phenotype is a reflection of various genetic components interacting in a complex network. In fact, genetic diseases, like any other phenotype, occur as a result of various genes working in sync with one another in one or several biological modules. Using a genetic algorithm, our method tries to evolve communities containing the set of potential disease genes likely to be involved in a given genetic disease. Starting from a set of known disease genes, we first obtain a protein-protein interaction (PPI) network containing all of them. All the other genes inside the procured PPI network are then considered candidate disease genes, as they lie in the vicinity of the known disease genes in the network. Our method attempts to find communities of potential disease genes interacting strongly with one another and with the set of known disease genes. As a proof of concept, we tested our approach on 16 breast cancer genes and 15 Parkinson's disease genes. We obtained results comparable to or better than those of CIPHER, ENDEAVOUR and GPEC, three of the most reliable and frequently used disease-gene ranking frameworks.
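One plausible ingredient of a fitness function for such communities (a generic sketch, not the thesis's actual objective) is the interaction density of a candidate gene set inside the PPI network. The toy network below is invented for illustration:

```python
from itertools import combinations

# Hypothetical toy PPI network: gene -> set of interacting genes
# (symmetric; not real interaction data).
PPI = {
    "BRCA1": {"BRCA2", "TP53", "RAD51"},
    "BRCA2": {"BRCA1", "RAD51"},
    "TP53": {"BRCA1", "MDM2"},
    "RAD51": {"BRCA1", "BRCA2"},
    "MDM2": {"TP53"},
}

def community_density(genes):
    # Fraction of possible gene pairs in the community that actually
    # interact; a genetic algorithm could use this as (part of) the
    # fitness of a candidate community.
    pairs = list(combinations(sorted(genes), 2))
    if not pairs:
        return 0.0
    linked = sum(1 for a, b in pairs if b in PPI.get(a, set()))
    return linked / len(pairs)

print(community_density({"BRCA1", "BRCA2", "RAD51"}))  # -> 1.0
```

A fuller fitness would also reward edges from the candidate community to the known disease genes, so that dense but irrelevant modules are not favoured.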
Abstract:
In this thesis we analyze dictionary graphs and some other kinds of graphs using the PageRank algorithm. We calculated the correlation between the degree and the PageRank of all nodes for graphs obtained from the Merriam-Webster dictionary, a French dictionary, and the WordNet hypernym and synonym dictionaries. Our conclusion was that PageRank can be a good tool for comparing the quality of dictionaries. We also studied some artificial social and random graphs. We found that when we omitted some random nodes from each of these graphs, the ranking of the nodes according to their PageRank did not change significantly. We also discovered that some of the social graphs selected for our study were less resistant to such changes in PageRank.
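PageRank itself is standard and can be sketched in a few lines of power iteration; the mini "dictionary" graph below (an edge u → v meaning word v appears in the definition of entry u) is invented for illustration and is unrelated to the thesis's datasets:

```python
import math

def pagerank(adj, damping=0.85, iters=100):
    # adj: node -> list of out-neighbours; returns node -> PageRank score.
    n = len(adj)
    pr = {v: 1.0 / n for v in adj}
    for _ in range(iters):
        nxt = {v: (1.0 - damping) / n for v in adj}
        for v, outs in adj.items():
            if outs:
                share = damping * pr[v] / len(outs)
                for w in outs:
                    nxt[w] += share
            else:  # dangling node: redistribute its rank uniformly
                for w in adj:
                    nxt[w] += damping * pr[v] / n
        pr = nxt
    return pr

def pearson(xs, ys):
    # Plain Pearson correlation coefficient, as used to compare
    # degree against PageRank.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

graph = {
    "run": ["move", "fast"],
    "move": ["go"],
    "fast": ["move", "quick"],
    "quick": ["fast"],
    "go": ["move"],
}
pr = pagerank(graph)
in_degree = {v: 0 for v in graph}
for outs in graph.values():
    for w in outs:
        in_degree[w] += 1
nodes = sorted(graph)
print(pearson([in_degree[v] for v in nodes], [pr[v] for v in nodes]))
```

On real dictionary graphs the thesis computes exactly this kind of degree-vs-PageRank correlation, at a much larger scale and typically with an optimized library implementation rather than this naive iteration.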