15 results for Hybrid heuristic algorithm
at Brock University, Canada
Abstract:
Spatial data representation and compression have become focus issues in computer graphics and image processing applications. Quadtrees, hierarchical data structures based on the principle of recursive decomposition of space, offer a compact and efficient representation of an image. For a given image, the choice of quadtree root node plays an important role in its quadtree representation and final data compression. The goal of this thesis is to present a heuristic algorithm for finding a root node of a region quadtree that reduces the number of leaf nodes compared with the standard quadtree decomposition. The empirical results indicate that the proposed algorithm improves quadtree representation and data compression in comparison with the traditional method.
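The recursive decomposition behind a region quadtree can be sketched minimally as follows. This is an illustrative toy, not the thesis's root-selection heuristic; the image and function name are assumptions. The leaf count it returns is exactly the quantity the proposed root-placement heuristic tries to reduce:

```python
def quadtree_leaves(img, x, y, size):
    """Count leaves of a region quadtree over the size x size block of img
    whose top-left corner is (x, y). A uniform block becomes one leaf;
    otherwise it is split into four quadrants recursively."""
    first = img[x][y]
    if all(img[i][j] == first
           for i in range(x, x + size)
           for j in range(y, y + size)):
        return 1
    half = size // 2
    return sum(quadtree_leaves(img, x + dx, y + dy, half)
               for dx in (0, half) for dy in (0, half))

# A 4x4 image whose black square straddles the standard quadrant
# boundaries: every quadrant is mixed, so the tree degenerates.
img = [[0, 0, 0, 0],
       [0, 1, 1, 0],
       [0, 1, 1, 0],
       [0, 0, 0, 0]]
print(quadtree_leaves(img, 0, 0, 4))  # -> 16
```

Shifting the decomposition grid so the black square aligned with a single quadrant would need far fewer leaves, which is why the placement of the root (the origin of the decomposition) matters for compression.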
Abstract:
The prediction of proteins' conformation helps to understand their exhibited functions, allows for modeling and for the possible synthesis of the studied protein. Our research is focused on a sub-problem of protein folding known as side-chain packing. Its computational complexity has been proven to be NP-hard. The motivation behind our study is to offer the scientific community a means to obtain faster conformation approximations for small to large proteins than currently available methods. As the size of proteins increases, current techniques become unusable due to the exponential nature of the problem. We investigated the capabilities of a hybrid genetic algorithm / simulated annealing technique to predict the low-energy conformational states of various-sized proteins and to generate statistical distributions of the studied proteins' molecular ensemble for pKa predictions. Our algorithm produced results within acceptable error margins of experimental values and offered considerable speed-up depending on the protein and on the rotameric states' resolution used.
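A generic hybrid of the two metaheuristics named above can be sketched on a toy discrete "energy". Everything here is an illustrative assumption, not the thesis's side-chain packing model: the GA supplies crossover and mutation, while a simulated-annealing Metropolis rule decides whether a child replaces its parent:

```python
import math
import random

def hybrid_ga_sa(energy, n_vars, n_states,
                 pop_size=20, gens=50, temp=2.0, cooling=0.95, seed=1):
    """Toy GA/SA hybrid over vectors of discrete states (stand-ins for
    rotamer assignments). GA operators propose children; SA acceptance
    with a cooling temperature filters them."""
    rng = random.Random(seed)
    pop = [[rng.randrange(n_states) for _ in range(n_vars)]
           for _ in range(pop_size)]
    for _ in range(gens):
        new_pop = []
        for parent in pop:
            mate = rng.choice(pop)
            cut = rng.randrange(1, n_vars)
            child = parent[:cut] + mate[cut:]      # one-point crossover
            i = rng.randrange(n_vars)
            child[i] = rng.randrange(n_states)     # point mutation
            delta = energy(child) - energy(parent)
            # Metropolis: keep improvements, sometimes accept worse moves
            if delta <= 0 or rng.random() < math.exp(-delta / temp):
                new_pop.append(child)
            else:
                new_pop.append(parent)
        pop = new_pop
        temp *= cooling                            # cooling schedule
    return min(pop, key=energy)

# Toy "energy": penalize adjacent variables in the same state,
# a crude stand-in for pairwise rotamer clash terms.
def toy_energy(conf):
    return sum(1 for a, b in zip(conf, conf[1:]) if a == b)

best = hybrid_ga_sa(toy_energy, n_vars=8, n_states=3)
print(toy_energy(best))
```

The design point the hybrid exploits is that the population gives broad exploration while the temperature-controlled acceptance prevents premature convergence on a single low-energy basin.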
Abstract:
This thesis introduces the Salmon Algorithm, a search meta-heuristic which can be used for a variety of combinatorial optimization problems. This algorithm is loosely based on the path finding behaviour of salmon swimming upstream to spawn. There are a number of tunable parameters in the algorithm, so experiments were conducted to find the optimum parameter settings for different search spaces. The algorithm was tested on one instance of the Traveling Salesman Problem and found to have superior performance to an Ant Colony Algorithm and a Genetic Algorithm. It was then tested on three coding theory problems - optimal edit codes, optimal Hamming distance codes, and optimal covering codes. The algorithm produced improvements on the best known values for five of six of the test cases using edit codes. It matched the best known results on four out of seven of the Hamming codes as well as three out of three of the covering codes. The results suggest the Salmon Algorithm is competitive with established guided random search techniques, and may be superior in some search spaces.
Abstract:
There are a considerable number of programs and agencies that count on the existence of a unique relationship between nature and human development. In addition, there are significant bodies of literature dedicated to understanding developmentally focused nature-based experiences. This research project was designed to further the understanding of this phenomenon. Consequently, the purpose of this research endeavour was to discover the essence of the intersection of personal transformation and nature-based leisure, culminating in a rich and detailed account of this otherwise tacit phenomenon. As such, this research built on the assumption of this beneficial intersection of nature and personal transformation and contributes to the understanding of how this context supports or generates self-actualization and positive development. Heuristic methods were employed because heuristics is concerned with the quality and essence of an experience, not causal relationships (Moustakas, 1990). Heuristic inquiry begins with the primary researcher and her personal experience and knowledge of the phenomenon. This study also involved four other co-researchers who had also experienced this phenomenon intensely. Co-researchers were found through purposeful and snowball sampling. Rich narrative descriptions of their experiences were gathered through in-depth, semi-structured interviews, and artifact elicitation was employed as a means to get at co-researchers' tacit knowledge. Each co-researcher was interviewed twice (the first interview focused on personal transformation, the second on nature) for approximately four and a half hours in total. Transcripts were read repeatedly to discern patterns that emerged from the study of the narratives and were coded accordingly. Individual narratives were consolidated to create a composite narrative of the experience. Finally, a creative synthesis was developed to represent the essence of this tacit experience.
In conclusion, the essence of the intersection of nature-based leisure and personal transformation was found to lie in the convergence of the lived experience of authenticity. The physical environment of nature was perceived and experienced as a space and context of authenticity, leisure experiences were experienced as an engagement of authenticity, and individuals themselves encountered a true or authentic self that emanated from within. The implications of these findings are many, ranging from reconsidered approaches to environmental education to support for self-directed human development.
Abstract:
This exploratory, descriptive action research study is based on a survey of a convenience sample consisting of 172 college and university marketing students, and 5 professors who were experienced in teaching in an internet-based environment. The students surveyed were studying e-commerce and international business in 3rd- and 4th-year classes at a leading university in Ontario and e-commerce in 5th-semester classes at a leading college. These classes were taught using a hybrid teaching style with the contribution of a large website that contained pertinent text and audio material. Hybrid teaching employs web-based course materials (some in the form of Learning Objects) to deliver curriculum material both during the attended lectures and also for students accessing the course web page outside of class hours. The survey was in the form of an online questionnaire. The research questions explored in this study were: 1. What factors influence the students' ability to access and learn from web-based course content? 2. How likely are the students to use selected elements of internet-based curriculum for learning academic content? 3. What is the preferred physical environment to facilitate learning in a hybrid environment? 4. How effective are selected teaching/learning strategies in a hybrid environment? The findings of this study suggest that students are very interested in being part of the learning process by contributing to a course web site. Specifically, students are interested in audio content being one of the formats of online course material, and have an interest in being part of the creation of small audio clips to be used in class.
Abstract:
The syntheses of 3-ethynylthienyl- (2.07) and 3-ethynylterthienyl- (2.19) substituted qsal [qsalH = N-(8-quinolyl)salicylaldimine] and 3,3'-diethynyl-2,2'-bithienyl bridging bisqsal (5.06) ligands are described along with the preparation and characterization of eight cationic iron(III) complexes containing these ligands with a selection of counteranions [(2.07) with: SCN- (2.08), PF6- (2.09), and ClO4- (2.10); (2.19) with: PF6- (2.20); (5.06) with: Cl- (5.07), SCN- (5.08), PF6- (5.09), and ClO4- (5.10)]. Spin-crossover is observed in the solid state for (2.08) - (2.10) and (5.07) - (5.10), including a very rare S = 5/2 to 3/2 spin-crossover in complex (2.09). The unusual reduction of complex (2.10) produces a high-spin iron(II) complex (2.12). Six iron(II) complexes derived from thienyl analogues of bispicen [bispicen = bis(2-pyridylmethyl)-diamine] [2,5-thienyl substituents = H- (3.11), phenyl- (3.12), 2-thienyl (3.13)] or N-phenyl-2-pyridinalimine ligands [2,5-phenyl substituents = diphenyl (3.23), di(2-thienyl) (3.24); 4-phenyl substituent = 3-thienyl (3.25)] are reported. Complexes (3.11), (3.23) and (3.25) display thermal spin-crossover in the solid state and (3.12) remains high-spin at all temperatures. Complex (3.13) rearranges to form an iron(II) complex (3.14) with temperature-dependent magnetic properties best described as a one-dimensional ferromagnetic chain, with interchain antiferromagnetic interactions and/or ZFS dominant at low temperatures. Magnetic susceptibility and Mössbauer data for complex (3.24) display a temperature-dependent mixture of spin isomers. The preparation and characterization of two cobalt(II) complexes containing 3-ethynylthienyl- (4.04) and 3-ethynylterthienyl- (4.06) substituted bipyridine ligands [(4.05): [Co(dbsq)2(4.04)]; (4.07): [Co(dbsq)2(4.06)]] [dbsq = 3,5-di-tert-butyl-1,2-semiquinonate] are reported. Complexes (4.05) and (4.07) exhibit thermal valence tautomerism in the solid state and in solution.
Self-assembly of complex (2.10) into polymeric spheres (6.11) afforded the first spin-crossover, polydisperse, micro- to nanoscale material of its kind. Complexes (2.20), (3.24) and (4.07) also form polymers through electrochemical synthesis to produce hybrid metallopolymer films (6.12), (6.15) and (6.16), respectively. The films have been characterized by EDX, FT-IR and UV-Vis spectroscopy. Variable-temperature magnetic susceptibility measurements demonstrate that spin lability is operative in the polymers and conductivity measurements confirm the electron transport properties. Polymer (6.15) has a persistent oxidized state that shows a significant decrease in electrical resistance.
Abstract:
Understanding the machinery of gene regulation to control gene expression has been one of the main focuses of bioinformaticians for years. We use a multi-objective genetic algorithm to evolve a specialized version of side effect machines for degenerate motif discovery. We compare some suggested objectives for the motifs they find, test different multi-objective scoring schemes and probabilistic models for the background sequence models and report our results on a synthetic dataset and some biological benchmarking suites. We conclude with a comparison of our algorithm with some widely used motif discovery algorithms in the literature and suggest future directions for research in this area.
Abstract:
Complex networks have recently attracted a significant amount of research attention due to their ability to model real-world phenomena. One important problem often encountered is to limit diffusive processes spread over the network, for example mitigating pandemic disease or computer virus spread. A number of problem formulations have been proposed that aim to solve such problems based on desired network characteristics, such as maintaining the largest network component after node removal. The recently formulated critical node detection problem aims to remove a small subset of vertices from the network such that the residual network has minimum pairwise connectivity. Unfortunately, the problem is NP-hard and the number of constraints is cubic in the number of vertices, making very large-scale problems impossible to solve with traditional mathematical programming techniques. Even approximation strategies such as dynamic programming and evolutionary algorithms are unusable for networks that contain thousands to millions of vertices. A computationally efficient and simple approach is required in such circumstances, but none currently exists. In this thesis, such an algorithm is proposed. The methodology is based on a depth-first search traversal of the network, and a specially designed ranking function that considers information local to each vertex. Due to the variety of network structures, a number of characteristics must be taken into consideration and combined into a single rank that measures the utility of removing each vertex. Since removing a vertex in sequential fashion impacts the network structure, an efficient post-processing algorithm is also proposed to quickly re-rank vertices. Experiments on a range of common complex network models with varying numbers of vertices are considered, in addition to real-world networks.
The proposed algorithm, DFSH, is shown to be highly competitive and often outperforms existing strategies such as Google PageRank for minimizing pairwise connectivity.
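The objective being minimized, and the greedy remove-and-re-rank flavour of the approach, can be illustrated with a minimal sketch. Plain degree stands in for the local ranking score; the thesis's DFS-based ranking function is more elaborate, so everything below is an illustrative assumption:

```python
from collections import defaultdict

def pairwise_connectivity(adj, removed):
    """Sum over residual components of |comp| choose 2: the number of
    still-connected vertex pairs after deleting `removed`."""
    seen, total = set(removed), 0
    for start in adj:
        if start in seen:
            continue
        stack, size = [start], 0      # iterative DFS over the residual graph
        seen.add(start)
        while stack:
            u = stack.pop()
            size += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        total += size * (size - 1) // 2
    return total

def greedy_critical_nodes(adj, k):
    """Remove the k vertices with the highest local score (here: residual
    degree), re-scoring after each removal since deletions change the
    structure -- the role of the re-ranking step in the text."""
    removed = set()
    for _ in range(k):
        best = max((u for u in adj if u not in removed),
                   key=lambda u: sum(1 for v in adj[u] if v not in removed))
        removed.add(best)
    return removed

# Two triangles joined through a cut vertex "c": removing "c"
# disconnects the graph, minimizing pairwise connectivity.
edges = [("a","b"), ("b","c"), ("a","c"), ("c","d"), ("d","e"), ("e","c")]
adj = defaultdict(set)
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

removed = greedy_critical_nodes(adj, 1)
print(removed, pairwise_connectivity(adj, removed))  # -> {'c'} 2
```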
Abstract:
The present work suggests that sentence processing requires both heuristic and algorithmic processing streams, where the heuristic processing strategy precedes the algorithmic phase. This conclusion is based on three self-paced reading experiments in which the processing of two-sentence discourses was investigated, where context sentences exhibited quantifier scope ambiguity. Experiment 1 demonstrates that such sentences are processed in a shallow manner. Experiment 2 uses the same stimuli as Experiment 1 but adds questions to ensure deeper processing. Results indicate that reading times are consistent with a lexical-pragmatic interpretation of number associated with context sentences, but responses to questions are consistent with the algorithmic computation of quantifier scope. Experiment 3 shows the same pattern of results as Experiment 2, despite using stimuli with different lexical-pragmatic biases. These effects suggest that language processing can be superficial, and that deeper processing, which is sensitive to structure, only occurs if required. Implications for recent studies of quantifier scope ambiguity are discussed.
Abstract:
DNA assembly is among the most fundamental and difficult problems in bioinformatics. Near-optimal assembly solutions are available for bacterial and small genomes; however, assembling large and complex genomes, especially the human genome, using Next-Generation Sequencing (NGS) technologies has proven very difficult because of the highly repetitive and complex nature of the human genome, short read lengths, uneven data coverage and tools that are not specifically built for human genomes. Moreover, many algorithms are not even scalable to human genome datasets containing hundreds of millions of short reads. The DNA assembly problem is usually divided into several subproblems including DNA data error detection and correction, contig creation, scaffolding and contig orientation; each can be seen as a distinct research area. This thesis specifically focuses on creating contigs from the short reads and combining them with outputs from other tools in order to obtain better results. Three different assemblers, SOAPdenovo [Li09], Velvet [ZB08] and Meraculous [CHS+11], are selected for comparative purposes in this thesis. The obtained results show that this thesis' work produces results comparable to other assemblers, and combining our contigs with outputs from other tools produces the best results, outperforming all other investigated assemblers.
Abstract:
Ordered gene problems are a very common class of optimization problems. Because of their popularity, countless algorithms have been developed in an attempt to find high-quality solutions to them, and many other types of problems are commonly reduced to ordered gene form so that these well-studied heuristics and metaheuristics can be applied. Multiple ordered gene problems are studied, namely the travelling salesman problem, the bin packing problem, and the graph colouring problem. In addition, two bioinformatics problems not traditionally seen as ordered gene problems are studied: DNA error correction and DNA fragment assembly. These problems are studied with multiple variations and combinations of heuristics and metaheuristics with two distinct types of representations. The majority of the algorithms are built around the Recentering-Restarting Genetic Algorithm. The algorithm variations were successful on all problems studied, and particularly on the two bioinformatics problems. For DNA error correction, multiple cases were found with 100% of the codes being corrected. The algorithm variations were also able to beat all other state-of-the-art DNA fragment assemblers on 13 out of 16 benchmark problem instances.
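A defining feature of ordered-gene representations is that every chromosome is a permutation, so crossover must preserve "each gene exactly once". A simplified order-crossover (OX) variant illustrates this constraint; it is an illustrative sketch of the representation, not the thesis's Recentering-Restarting Genetic Algorithm:

```python
import random

def order_crossover(p1, p2, rng):
    """Simplified OX: copy a random slice from p1, then fill the
    remaining positions with the missing genes in p2's order.
    The child is always a valid permutation of the parents' genes."""
    n = len(p1)
    i, j = sorted(rng.sample(range(n), 2))
    child = [None] * n
    child[i:j] = p1[i:j]                       # inherited slice
    fill = [g for g in p2 if g not in child[i:j]]
    k = 0
    for pos in list(range(0, i)) + list(range(j, n)):
        child[pos] = fill[k]                   # p2's ordering elsewhere
        k += 1
    return child

rng = random.Random(0)
parent1 = [0, 1, 2, 3, 4, 5]   # e.g. a TSP tour as a city ordering
parent2 = [5, 4, 3, 2, 1, 0]
child = order_crossover(parent1, parent2, rng)
print(child)
```

Naive one-point crossover on such chromosomes would duplicate and drop genes, producing invalid tours, packings, or colourings, which is why permutation-safe operators like this are standard for ordered-gene problems.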
Abstract:
The purpose of this research was to examine the ways in which individuals with mental illness create a life of purpose, satisfaction and meaning. The data supported the identification of four common themes: (1) the power of leisure in activation, (2) the power of leisure in resiliency, (3) the power of leisure in identity and (4) the power of leisure in reducing struggle. Through an exploration of the experience of having a mental illness, this project supports the view that leisure provides therapeutic benefits that transcend negative life events. In addition, this project highlights the individual nature of recovery as a process of self-discovery. Through the creation of a visual model, this project provides a benchmark for how a small group of individuals have experienced living well with mental illness. As such, this work brings new thought to the growing body of mental health and leisure studies literature.
Abstract:
Understanding the relationship between genetic diseases and the genes associated with them is an important problem regarding human health. The vast amount of data created from a large number of high-throughput experiments performed in the last few years has resulted in an unprecedented growth in computational methods to tackle the disease gene association problem. Nowadays, it is clear that a genetic disease is not a consequence of a defect in a single gene. Instead, the disease phenotype is a reflection of various genetic components interacting in a complex network. In fact, genetic diseases, like any other phenotype, occur as a result of various genes working in sync with each other in a single or several biological module(s). Using a genetic algorithm, our method tries to evolve communities containing the set of potential disease genes likely to be involved in a given genetic disease. Having a set of known disease genes, we first obtain a protein-protein interaction (PPI) network containing all the known disease genes. All the other genes inside the procured PPI network are then considered as candidate disease genes as they lie in the vicinity of the known disease genes in the network. Our method attempts to find communities of potential disease genes strongly working with one another and with the set of known disease genes. As a proof of concept, we tested our approach on 16 breast cancer genes and 15 Parkinson's Disease genes. We obtained comparable or better results than CIPHER, ENDEAVOUR and GPEC, three of the most reliable and frequently used disease-gene ranking frameworks.
Abstract:
Epilepsy is a chronic neurological disorder characterized by recurrent seizures (Stein & Kanner, 2009). The purpose of this study was to understand the essence of being a young woman living with epilepsy using heuristic inquiry (Moustakas, 1990). The research was built upon the assumption that each experience is unique, yet commonalities exist. Five women aged 22 to 28 years living with epilepsy were interviewed. Additionally, the researcher described her life with epilepsy. Participants characterized life with epilepsy as a transformative journey. The act of meeting and interacting with another woman living with epilepsy provided an opportunity to remove themselves from the shadows and discuss epilepsy. Three major themes of seizures, medical treatment, and social relationships were developed revealing a complex view of an illness requiring engaged advocacy in the medical system. Respondents frequently make difficult adjustments to accommodate epilepsy. This study provides a complex in-depth view of life with epilepsy.
Abstract:
In this thesis we analyze dictionary graphs and some other kinds of graphs using the PageRank algorithm. We calculated the correlation between the degree and PageRank of all nodes for a graph obtained from the Merriam-Webster dictionary, a French dictionary and the WordNet hypernym and synonym dictionaries. Our conclusion was that PageRank can be a good tool to compare the quality of dictionaries. We also studied some artificial social and random graphs. We found that when we omitted some random nodes from each of the graphs, we did not notice any significant changes in the ranking of the nodes according to their PageRank. We also discovered that some social graphs selected for our study were less resistant to changes in PageRank.
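A minimal power-iteration PageRank of the kind applied to such graphs can be sketched as follows. The tiny word-graph and all names are illustrative assumptions, not the thesis's dictionary data; edges point from a word to words used in its definition:

```python
def pagerank(adj, damping=0.85, iters=100):
    """Power-iteration PageRank on a directed graph given as
    {node: [out-neighbours]}. Dangling nodes spread rank uniformly,
    so the total rank mass stays at 1."""
    nodes = list(adj)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        new = {u: (1 - damping) / n for u in nodes}
        for u in nodes:
            out = adj[u]
            if not out:                       # dangling node
                for v in nodes:
                    new[v] += damping * rank[u] / n
            else:
                for v in out:
                    new[v] += damping * rank[u] / len(out)
        rank = new
    return rank

# Toy dictionary graph: a word links to the words in its definition.
adj = {"run": ["move"], "sprint": ["run", "move"],
       "move": [], "jog": ["run"]}
rank = pagerank(adj)
print(max(rank, key=rank.get))  # -> move
```

Comparing these ranks with each node's in-degree (the number of definitions a word appears in) gives exactly the kind of degree/PageRank correlation the thesis measures.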