1000 results for Solovay-Kitaev algorithm
Abstract:
Ship tracking systems allow maritime organizations concerned with safety at sea to obtain information on the current location and route of merchant vessels. Thanks to space technology, the geographical coverage of ship tracking platforms has increased significantly in recent years, from radar-based near-shore traffic monitoring to a worldwide picture of the maritime traffic situation. The long-range tracking systems currently in operation allow ship position data to be stored over many years: a valuable source of knowledge about the shipping routes between different ocean regions. The outcome of this Master's project is a software prototype for estimating the most operated shipping route between any two geographical locations. The analysis is based on historical ship positions acquired with long-range tracking systems. The proposed approach applies a Genetic Algorithm to a training set of relevant ship positions extracted from the long-term storage tracking database of the European Maritime Safety Agency (EMSA). The analysis of some representative shipping routes is presented, and the quality of the results and their operational applications are assessed by a maritime safety expert.
Abstract:
Caring for a patient with ulcerative colitis (UC) remains challenging, even though morbidity and mortality rates have been considerably reduced over the last 30 years. The traditional management with intravenous corticosteroids was modified by the introduction of ciclosporin and infliximab. In this review, we focus on the treatment of patients with moderate to severe UC. Four typical clinical scenarios are defined and discussed in detail. The treatment recommendations are based on current literature, published guidelines and reviews, and were discussed at a consensus meeting of Swiss experts in the field. Comprehensive treatment algorithms were developed, aimed at daily clinical practice.
Abstract:
This thesis introduces the Salmon Algorithm, a search meta-heuristic that can be used for a variety of combinatorial optimization problems. The algorithm is loosely based on the path-finding behaviour of salmon swimming upstream to spawn. It has a number of tunable parameters, so experiments were conducted to find the optimum parameter settings for different search spaces. The algorithm was tested on one instance of the Traveling Salesman Problem and found to outperform an Ant Colony Algorithm and a Genetic Algorithm. It was then tested on three coding theory problems: optimal edit codes, optimal Hamming distance codes, and optimal covering codes. The algorithm improved on the best known values for five of the six edit code test cases. It matched the best known results on four of the seven Hamming codes and on all three covering codes. The results suggest the Salmon Algorithm is competitive with established guided random search techniques, and may be superior in some search spaces.
Abstract:
Understanding the machinery of gene regulation to control gene expression has been one of the main focuses of bioinformaticians for years. We use a multi-objective genetic algorithm to evolve a specialized version of side effect machines for degenerate motif discovery. We compare some suggested objectives for the motifs they find, test different multi-objective scoring schemes and probabilistic models for the background sequences, and report our results on a synthetic dataset and several biological benchmarking suites. We conclude with a comparison of our algorithm with some widely used motif discovery algorithms in the literature and suggest future directions for research in this area.
Abstract:
DNA assembly is among the most fundamental and difficult problems in bioinformatics. Near-optimal assembly solutions are available for bacterial and small genomes; however, assembling large and complex genomes, especially the human genome, using Next-Generation Sequencing (NGS) technologies has proven very difficult because of the highly repetitive and complex nature of the human genome, short read lengths, uneven data coverage, and tools that are not specifically built for human genomes. Moreover, many algorithms are not even scalable to human genome datasets containing hundreds of millions of short reads. The DNA assembly problem is usually divided into several subproblems, including DNA data error detection and correction, contig creation, scaffolding, and contig orientation, each of which can be seen as a distinct research area. This thesis focuses specifically on creating contigs from the short reads and combining them with the outputs of other tools in order to obtain better results. Three assemblers, SOAPdenovo [Li09], Velvet [ZB08] and Meraculous [CHS+11], are selected for comparison in this thesis. The results show that this thesis' work produces results comparable to other assemblers, and that combining our contigs with the outputs of other tools produces the best results, outperforming all other investigated assemblers.
Abstract:
Ordered gene problems are a very common class of optimization problems. Because of their popularity, countless algorithms have been developed in an attempt to find high-quality solutions to them, and many other types of problems are commonly reduced to ordered gene style problems so that these popular heuristics and metaheuristics can be applied. Multiple ordered gene problems are studied, namely the travelling salesman problem, the bin packing problem, and the graph colouring problem. In addition, two bioinformatics problems not traditionally seen as ordered gene problems are studied: DNA error correction and DNA fragment assembly. These problems are studied with multiple variations and combinations of heuristics and metaheuristics, with two distinct types of representations. The majority of the algorithms are built around the Recentering-Restarting Genetic Algorithm. The algorithm variations were successful on all problems studied, particularly the two bioinformatics problems. For DNA error correction, multiple cases were found in which 100% of the codes were corrected. The algorithm variations were also able to beat all other state-of-the-art DNA fragment assemblers on 13 of 16 benchmark problem instances.
Abstract:
Understanding the relationship between genetic diseases and the genes associated with them is an important problem for human health. The vast amount of data created by the large number of high-throughput experiments performed in the last few years has resulted in an unprecedented growth in computational methods to tackle the disease gene association problem. It is now clear that a genetic disease is not the consequence of a defect in a single gene. Instead, the disease phenotype is a reflection of various genetic components interacting in a complex network. In fact, genetic diseases, like any other phenotype, occur as a result of various genes working in sync with each other in one or several biological modules. Using a genetic algorithm, our method tries to evolve communities containing the set of potential disease genes likely to be involved in a given genetic disease. Given a set of known disease genes, we first obtain a protein-protein interaction (PPI) network containing all the known disease genes. All the other genes in the procured PPI network are then considered candidate disease genes, as they lie in the vicinity of the known disease genes in the network. Our method attempts to find communities of potential disease genes working strongly with one another and with the set of known disease genes. As a proof of concept, we tested our approach on 16 breast cancer genes and 15 Parkinson's disease genes. We obtained comparable or better results than CIPHER, ENDEAVOUR and GPEC, three of the most reliable and frequently used disease-gene ranking frameworks.
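As a rough illustration of the kind of search this abstract describes, the sketch below scores a candidate community of genes by its internal PPI connectivity and its links to the known disease genes; the fitness definition, the weighting, the mutation operator, and the gene names other than BRCA1/BRCA2 are assumptions of this sketch, not details taken from the thesis.

```python
# Illustrative sketch: score a candidate community of genes on a PPI network
# by its internal connectivity and its links to the known disease genes.
import random

def community_fitness(community, ppi_edges, known_genes):
    community = set(community)
    internal = sum(1 for u, v in ppi_edges if u in community and v in community)
    to_known = sum(1 for u, v in ppi_edges
                   if (u in community and v in known_genes)
                   or (v in community and u in known_genes))
    return internal + 2 * to_known          # assumed weighting of links to known genes

def mutate(community, candidate_genes, rate=0.2):
    """Swap a random member for a random candidate gene with some probability."""
    community = list(community)
    if random.random() < rate and community:
        community[random.randrange(len(community))] = random.choice(candidate_genes)
    return community

# Toy network: BRCA1/BRCA2 as known genes, G1-G4 as hypothetical candidates.
ppi_edges = [("BRCA1", "G1"), ("G1", "G2"), ("G2", "BRCA2"), ("G3", "G4")]
known = {"BRCA1", "BRCA2"}
candidates = ["G1", "G2", "G3", "G4"]
print(community_fitness(["G1", "G2"], ppi_edges, known))   # 1 internal edge + 2 links to known genes
```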
Abstract:
In this thesis we analyze dictionary graphs and some other kinds of graphs using the PageRank algorithm. We calculated the correlation between the degree and the PageRank of all nodes for graphs obtained from the Merriam-Webster dictionary, a French dictionary, and the WordNet hypernym and synonym dictionaries. Our conclusion was that PageRank can be a good tool for comparing the quality of dictionaries. We also studied some artificial social and random graphs. We found that when we omitted some random nodes from each of the graphs, there were no significant changes in the ranking of the nodes according to their PageRank. We also discovered that some of the social graphs selected for our study were less resistant to changes in PageRank.
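For readers unfamiliar with the method named above, the following sketch computes PageRank by power iteration on a small directed word graph and correlates it with node degree, roughly the analysis the abstract describes; the toy edges and helper names are illustrative only, not taken from the thesis.

```python
# Illustrative sketch (not the thesis code): PageRank by power iteration on a
# directed word graph, then Pearson correlation between degree and PageRank.
from collections import defaultdict

def pagerank(edges, damping=0.85, iters=100):
    """edges: (u, v) pairs, e.g. (headword, word appearing in its definition)."""
    out = defaultdict(list)
    nodes = set()
    for u, v in edges:
        out[u].append(v)
        nodes.update((u, v))
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1.0 - damping) / n for v in nodes}
        for u in nodes:
            targets = out[u]
            if targets:                          # spread rank along out-links
                for v in targets:
                    new[v] += damping * rank[u] / len(targets)
            else:                                # dangling node: spread uniformly
                for v in nodes:
                    new[v] += damping * rank[u] / n
        rank = new
    return rank

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

edges = [("dog", "animal"), ("cat", "animal"), ("animal", "living"), ("living", "animal")]
pr = pagerank(edges)
degree = defaultdict(int)
for u, v in edges:
    degree[u] += 1
    degree[v] += 1
nodes = list(pr)
print(pearson([degree[v] for v in nodes], [pr[v] for v in nodes]))
```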
Abstract:
We consider envy-free (and budget-balanced) rules that are least manipulable with respect to agents counting or with respect to utility gains. Recently it has been shown that for any profile of quasi-linear preferences, the outcome of any such least manipulable envy-free rule can be obtained via agent-k-linked allocations. This note provides an algorithm for identifying agent-k-linked allocations.
Abstract:
Live-cell fluorescence microscopy produces large quantities of data. These data show great diversity in the shape of the objects of interest and have a very low signal-to-noise ratio. To design an effective image-processing pipeline for fluorescence microscopy, a robust and reliable segmentation is essential, since it constitutes the initial step of the image processing. In this thesis, I present MinSeg, a segmentation algorithm for fluorescence microscopy images that makes few assumptions about the image and uses statistical properties to distinguish signal from noise. MinSeg makes no assumption about the size or shape of the objects contained in the image and is therefore applicable to a wide variety of images. I also present a suite of algorithms, built on the MinSeg segmentation algorithm, for the quantification of small complexes in single-molecule fluorescence microscopy experiments. This suite of algorithms was used to quantify a protein called CENP-A, a variant of histone H3. Using this technique, we found that CENP-A is mainly present as a dimer.
Abstract:
In post-emergency contexts such as that experienced by the western part of the Democratic Republic of the Congo (DRC), one of the crucial challenges facing rural hospitals is maintaining a stock of essential medicines in the pharmacy. Without these medicines to treat serious illnesses, the impact on the population's health is significant. Hospitals also incur financial losses due to expiry when too many medicines are ordered. Moreover, the costs of transporting the medicines and the supervisor are very high for isolated hospitals; the transport costs alone can exceed the cost of the medicines. Using the province of Bandundu, DRC as a case study, our research attempts to determine the feasibility (in terms of both the complexity of the problem and the potential savings) of a synchronized routing problem for medicine delivery and supervision visits. We propose a formulation of the capacitated vehicle routing problem that handles several new requirements: synchronization of activities, precedence, and two activity frequencies. We implement a "cluster first, route second" heuristic with a geospatial database to solve the problem. We also present a web tool for visualizing the solutions on maps. The preliminary results of our study suggest that a synchronized solution could allow rural hospitals to increase the accessibility of medical services to rural populations with only a modest increase over the current transport cost.
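The "cluster first, route second" heuristic mentioned above can be sketched as follows; the sweep-style clustering, nearest-neighbour routing, and toy coordinates are simplifications assumed here, and the sketch ignores the synchronization, precedence, and frequency requirements that the thesis actually handles.

```python
# Illustrative sketch of "cluster first, route second": group hospitals by
# proximity around the depot (respecting vehicle capacity), then order each
# cluster greedily into a route.
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def cluster_first(hospitals, demand, capacity):
    """Sweep by angle around the depot, starting a new cluster when capacity is hit."""
    depot = (0.0, 0.0)
    order = sorted(hospitals, key=lambda h: math.atan2(h[1] - depot[1], h[0] - depot[0]))
    clusters, current, load = [], [], 0
    for h in order:
        if load + demand[h] > capacity and current:
            clusters.append(current)
            current, load = [], 0
        current.append(h)
        load += demand[h]
    if current:
        clusters.append(current)
    return clusters

def route_second(cluster):
    """Nearest-neighbour ordering starting from the depot."""
    pos, route, remaining = (0.0, 0.0), [], list(cluster)
    while remaining:
        nxt = min(remaining, key=lambda h: dist(pos, h))
        route.append(nxt)
        remaining.remove(nxt)
        pos = nxt
    return route

hospitals = [(2, 1), (3, 4), (-1, 5), (-4, -2), (1, -3)]
demand = {h: 1 for h in hospitals}
for c in cluster_first(hospitals, demand, capacity=2):
    print(route_second(c))
```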
Abstract:
A genetic algorithm has been used for null steering in phased and adaptive arrays. It has been shown that it is possible to steer the array nulls precisely to the required interference directions and to achieve any prescribed null depths. A comparison with the results obtained from the analytic solution shows the advantages of using the genetic algorithm for null steering in linear array patterns.
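As an illustration of the technique described, the sketch below evaluates a candidate set of element weights for a uniform linear array by the power left at the interference directions; the chromosome encoding, fitness definition, and element spacing are assumptions of this sketch rather than the paper's exact formulation.

```python
# Illustrative sketch (assumed details, not the paper's formulation): evaluate a
# candidate weight vector for a uniform linear array by rewarding deep nulls at
# the interference directions while preserving the main beam.
import cmath, math

def array_factor(weights, theta_deg, spacing=0.5):
    """Array factor of a uniform linear array; element spacing in wavelengths."""
    theta = math.radians(theta_deg)
    return sum(w * cmath.exp(1j * 2 * math.pi * spacing * n * math.sin(theta))
               for n, w in enumerate(weights))

def fitness(weights, null_dirs_deg, main_dir_deg=0.0):
    """Lower is better: power leaked into the nulls minus main-beam power."""
    null_power = sum(abs(array_factor(weights, d)) ** 2 for d in null_dirs_deg)
    main_power = abs(array_factor(weights, main_dir_deg)) ** 2
    return null_power - main_power

# A GA would evolve the element weights; uniform weights are shown as a baseline.
uniform = [1.0] * 8
print(fitness(uniform, null_dirs_deg=[30.0, 50.0]))
```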
Abstract:
The assembly job shop scheduling problem (AJSP) is one of the most complicated combinatorial optimization problems, involving the simultaneous scheduling of the processing and assembly operations of complex structured products. The problem becomes even more complicated if a combination of two or more optimization criteria is considered. This thesis addresses an assembly job shop scheduling problem with multiple objectives: simultaneously minimizing makespan and total tardiness. Two approaches, a weighted approach and a Pareto approach, are used for solving the problem. However, it is quite difficult to achieve an optimal solution to this problem with traditional optimization approaches owing to the high computational complexity. Two metaheuristic techniques, genetic algorithm and tabu search, are therefore investigated in this thesis for solving the multi-objective assembly job shop scheduling problem (MOAJSP). Three algorithms based on these two metaheuristics are proposed for the weighted and Pareto approaches. A new pairing mechanism is developed for the crossover operation in the genetic algorithm, which leads to improved solutions and faster convergence. The performance of the proposed algorithms is evaluated on a set of test problems and the results are reported. The results reveal that the proposed algorithms based on the weighted approach are feasible and effective for solving MOAJSP instances according to the weight assigned to each objective criterion, and that the proposed algorithms based on the Pareto approach are capable of producing a number of good Pareto-optimal scheduling plans for MOAJSP instances.
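The weighted approach mentioned above reduces the two objectives to a single scalar fitness that a genetic algorithm or tabu search can minimize; a minimal sketch under assumed weights and a toy schedule (not the thesis' encoding) follows.

```python
# Illustrative sketch of the weighted approach: scalarize makespan and total
# tardiness into one fitness value for a GA or tabu search to minimize.
def makespan(completion_times):
    return max(completion_times.values())

def total_tardiness(completion_times, due_dates):
    return sum(max(0, completion_times[j] - due_dates[j]) for j in completion_times)

def weighted_fitness(completion_times, due_dates, w_makespan=0.5, w_tardiness=0.5):
    return (w_makespan * makespan(completion_times)
            + w_tardiness * total_tardiness(completion_times, due_dates))

# Toy example: completion times and due dates for three assembled products.
completion = {"J1": 12, "J2": 20, "J3": 17}
due = {"J1": 15, "J2": 18, "J3": 17}
print(weighted_fitness(completion, due))   # 0.5 * 20 + 0.5 * (0 + 2 + 0) = 11.0
```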
Abstract:
Decimal multiplication is an integral part of financial, commercial, and internet-based computations. This research proposes a novel design for single-digit decimal multiplication that reduces the critical path delay and area of an iterative multiplier. The partial products are generated using single-digit multipliers and are accumulated based on a novel RPS algorithm. The design uses n single-digit multipliers for an n × n multiplication. The latency for the multiplication of two n-digit Binary Coded Decimal (BCD) operands is (n + 1) cycles, and a new multiplication can begin every n cycles. The accumulation of the final partial products and the first iteration of partial product generation for the next set of inputs are done simultaneously. This iterative decimal multiplier offers low latency and high throughput, and can be extended to decimal floating-point multiplication.
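As a purely behavioural sketch of the iterative scheme described (one multiplier digit consumed per cycle, n single-digit products formed in parallel), the following model reproduces the data flow; the RPS accumulation itself is specific to this work and is replaced here by a plain addition.

```python
# Illustrative behavioural model (not the hardware design): multiply two n-digit
# BCD operands by consuming one multiplier digit per cycle; each cycle forms n
# single-digit decimal products, which the proposed design would accumulate with
# its RPS scheme (ordinary addition is used here instead).
def bcd_multiply(a_digits, b_digits):
    """a_digits, b_digits: least-significant-digit-first lists of decimal digits."""
    result = 0
    for cycle, b in enumerate(b_digits):                 # one operand digit per cycle
        partials = [a * b for a in a_digits]             # n single-digit multipliers
        row = sum(p * 10 ** i for i, p in enumerate(partials))
        result += row * 10 ** cycle                      # shift by the cycle index
    return result

# 123 x 456, digits stored least significant first.
print(bcd_multiply([3, 2, 1], [6, 5, 4]))   # 56088
```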
Abstract:
Decision trees are very powerful tools for classification in data mining tasks that involve different types of attributes. When handling numeric data sets, the attributes are usually first converted to categorical types and then classified using information gain concepts. Information gain is a very popular and useful concept that indicates whether splitting on a given attribute yields any benefit in terms of information content. However, this process is computationally intensive for large data sets, and popular decision tree algorithms such as ID3 cannot handle numeric data sets. This paper proposes statistical variance as an alternative to information gain, together with the statistical mean as the split point, for completely numerical data sets. The new algorithm has been shown to be competitive with its information-gain counterpart C4.5, and with many existing decision tree algorithms, on the standard UCI benchmark datasets using the ANOVA test. The specific advantages of the proposed algorithm are that it avoids the computational overhead of information gain computation for large data sets with many attributes, as well as the conversion of huge numeric data sets to categorical data, which is also time-consuming. In summary, huge numeric datasets can be submitted directly to this algorithm without any attribute mapping or information gain computation. It also blends the two closely related fields of statistics and data mining.
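A minimal sketch of the split criterion described above, using the attribute mean as the split point and weighted variance in place of information gain; the toy data and the use of numeric class codes are assumptions of this sketch, not details from the paper.

```python
# Illustrative sketch: choose the attribute whose mean-valued split yields the
# lowest weighted variance of the (numerically coded) class labels, instead of
# computing information gain.
def weighted_variance(groups):
    """Weighted average of the variance of the class labels in each group."""
    total = sum(len(g) for g in groups)
    result = 0.0
    for g in groups:
        if not g:
            continue
        mean = sum(g) / len(g)
        var = sum((y - mean) ** 2 for y in g) / len(g)
        result += (len(g) / total) * var
    return result

def best_split(rows, labels):
    """rows: list of numeric feature vectors; labels: numeric class codes."""
    best = None
    for j in range(len(rows[0])):
        split = sum(r[j] for r in rows) / len(rows)       # statistical mean as split point
        left = [y for r, y in zip(rows, labels) if r[j] <= split]
        right = [y for r, y in zip(rows, labels) if r[j] > split]
        score = weighted_variance([left, right])
        if best is None or score < best[0]:
            best = (score, j, split)
    return best                                            # (score, attribute index, threshold)

rows = [[2.0, 30.0], [2.5, 10.0], [8.0, 32.0], [9.0, 12.0]]
labels = [0, 0, 1, 1]
print(best_split(rows, labels))   # attribute 0 separates the classes perfectly
```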