801 results for Expectation maximization algorithm
Abstract:
The determination of the intersection curve between Bézier surfaces can be seen as the composition of two separate problems: determining initial points and tracing the intersection curve from these points. A Bézier surface is represented by a parametric function (a polynomial in two variables) that maps a point from the two-dimensional parametric space into three-dimensional space. This article proposes an algorithm for determining the initial points of the intersection curve of Bézier surfaces, based on solving polynomial systems with the Projected Polyhedral Method, followed by a method for tracing the intersection curves (a marching method with differential equations). To allow the use of the Projected Polyhedral Method, the equations of the system must be expressed in the Bernstein basis, and to this end a robust and reliable algorithm is proposed for exactly transforming a multivariable polynomial written in the power basis into one written in the Bernstein basis.
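The power-basis-to-Bernstein-basis conversion mentioned above has a simple closed form in the univariate case. The sketch below is a hypothetical illustration of that idea, not the article's multivariable algorithm: it converts power-basis coefficients a_i of a degree-n polynomial into Bernstein coefficients b_j = sum_{i<=j} [C(j,i)/C(n,i)] a_i, using exact rational arithmetic to keep the transformation error-free.

```python
from fractions import Fraction
from math import comb

def power_to_bernstein(a):
    """Convert power-basis coefficients [a_0, ..., a_n] of p(t) = sum_i a_i t^i
    into Bernstein coefficients b_j with p(t) = sum_j b_j C(n,j) t^j (1-t)^(n-j).
    Exact rational arithmetic, so the change of basis introduces no round-off."""
    n = len(a) - 1
    return [sum(Fraction(comb(j, i), comb(n, i)) * Fraction(a[i])
                for i in range(j + 1))
            for j in range(n + 1)]

# Example: p(t) = 1 + 2t + 3t^2 on [0, 1] gives Bernstein coefficients [1, 2, 6].
print(power_to_bernstein([1, 2, 3]))
```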
Abstract:
In this paper we present an algorithm for the numerical simulation of cavitation in the hydrodynamic lubrication of journal bearings. Although this physical process is usually modelled as a free boundary problem, we adopted the equivalent variational inequality formulation. We propose a two-level iterative algorithm, in which the outer iteration is associated with the penalty method, used to transform the variational inequality into a variational equation, and the inner iteration is associated with the conjugate gradient method, used to solve the linear system generated by applying the finite element method to the variational equation. The inner part was implemented using the element-by-element strategy, which is easily parallelized. We analyse the behaviour of two physical parameters and discuss some numerical results. We also analyse some results related to the performance of a parallel implementation of the algorithm.
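A minimal sketch of the two-level structure described above, under simplifying assumptions: instead of the Reynolds/cavitation model it uses a generic one-dimensional obstacle problem (u >= 0), with an outer penalty update re-linearized on the currently negative set and an inner conjugate gradient solve of the resulting symmetric positive-definite system. It is an illustration of the algorithmic pattern, not the paper's lubrication code.

```python
import numpy as np

def cg(A, b, x0, tol=1e-10, maxit=2000):
    """Plain conjugate gradient for a symmetric positive-definite matrix A."""
    x = x0.copy()
    r = b - A @ x
    if np.linalg.norm(r) < tol:
        return x
    p, rs = r.copy(), r @ r
    for _ in range(maxit):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def penalized_obstacle(n=200, eps=1e-4, outer_iters=50):
    """Solve -u'' = f, u(0) = u(1) = 0, subject to u >= 0, via the penalized
    equation A u + (1/eps) min(u, 0) = f."""
    h = 1.0 / (n + 1)
    A = (np.diag(np.full(n, 2.0))
         - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    x = np.linspace(h, 1 - h, n)
    f = -10.0 * np.sin(2 * np.pi * x)          # forcing that drives u negative
    u = np.zeros(n)
    for _ in range(outer_iters):               # outer (penalty) iteration
        D = np.diag((u < 0).astype(float) / eps)
        u_new = cg(A + D, f, u)                # inner (conjugate gradient) iteration
        if np.linalg.norm(u_new - u) < 1e-8:
            return x, u_new
        u = u_new
    return x, u

x, u = penalized_obstacle()
print(u.min())   # only O(eps) below zero where the constraint is active
```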
Abstract:
This thesis uses an automatic pattern recognition algorithm and common two-moving-average crossover rules to explain the sell-buy imbalance of retail investors trading on the Stuttgart stock exchange, and thereby to answer the question "do retail investors base their trading decisions on technical analysis methods?" Based on prior research on investor behaviour and on the profitability of technical analysis, the baseline assumption was that retail investors would use technical analysis methods. The empirical study, whose data covers the DAX30 companies over the years 2009-2013, did not produce a sufficiently clear answer to the research question. Weak evidence nevertheless seems to indicate that retail investors shift their trading behaviour in the direction suggested by certain patterns and crossover rules.
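For reference, the two-moving-average crossover rule referred to above is easy to state. The following is a generic sketch with hypothetical window lengths, not the thesis's pattern recognition algorithm: a buy signal fires when the short average crosses above the long one, a sell signal on the opposite crossing.

```python
def sma(prices, window):
    """Simple moving average; None until a full window is available."""
    return [None if i + 1 < window else sum(prices[i + 1 - window:i + 1]) / window
            for i in range(len(prices))]

def crossover_signals(prices, short=5, long=20):
    """Return (index, 'buy'/'sell') events for a two-SMA crossover rule."""
    s, l = sma(prices, short), sma(prices, long)
    signals = []
    for i in range(1, len(prices)):
        if None in (s[i - 1], l[i - 1]):
            continue
        if s[i - 1] <= l[i - 1] and s[i] > l[i]:
            signals.append((i, "buy"))    # short SMA crosses above long SMA
        elif s[i - 1] >= l[i - 1] and s[i] < l[i]:
            signals.append((i, "sell"))   # short SMA crosses below long SMA
    return signals
```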
Abstract:
This thesis addresses reputational risk management as a key element of business continuity and value maximization. The purpose of the work is to investigate reputational risk in terms of its definition, its management (including legal requirements applying to this risk category) and its measurement, and to analyse its impact on business continuity and value maximization. To this end, relevant articles and reports from financial institutions are gathered, summarized and analysed. To investigate the impact of reputational risk on business continuity and value maximization in depth, it was studied from three angles: 1) the impact on the stock valuation of 7 companies that experienced a reputational catastrophe, 2) a case study on disagreements over the management of reputational risk among the case companies and the impact on their respective performance, and 3) a survey of financial sector companies in Liechtenstein on how reputational risk management works in practice. The findings show a significant impact of reputational damage on a company's value and trading volume, and the crucial importance of post-crisis management for a company's financial performance. The results of the qualitative, survey-based research show that companies consider reputational risk management one of the key elements of their business continuity and value maximization.
Abstract:
This work presents a synopsis of efficient strategies used in power management to achieve the most economical power and energy consumption in multicore systems, FPGAs and NoC platforms. A practical approach was taken in an effort to validate the significance of the proposed Adaptive Power Management Algorithm (APMA) for the system developed for this thesis project. The system comprises an arithmetic and logic unit, up and down counters, an adder, a state machine and a multiplexer. The purpose of the project was, first, to develop the system used for the power management work; second, to perform an area and power synopsis of the system on several scalable technology platforms, UMC 90 nm technology at 1.2 V, UMC 90 nm technology at 1.32 V and UMC 0.18 µm technology at 1.80 V, in order to examine the differences in the system's area and power consumption across platforms; and third, to explore various strategies for reducing the system's power consumption and to propose an adaptive power management algorithm for doing so. The strategies introduced in this work comprise Dynamic Voltage and Frequency Scaling (DVFS) and task parallelism. After development, the system was run on an FPGA board (essentially NoC platforms) and on the technology platforms listed above; system synthesis was successfully accomplished, the simulation results show that the system meets all functional requirements, and the power consumption and area utilization were recorded and analysed in chapter 7 of this work. This work also extensively reviews strategies for managing power consumption, drawing on quantitative research by many researchers and companies; it is a mixture of study analysis and experimental lab work, and it condenses and presents the basic concepts of power management strategy from quality technical papers.
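As a rough, hedged illustration of the DVFS strategy named above (not the thesis's APMA), recall that dynamic power scales approximately as P ≈ activity · C · V² · f, so dropping to a lower voltage/frequency operating point when utilization is low cuts power superlinearly. The sketch below picks an operating point from a small hypothetical table based on measured utilization; the table values are illustrative, not the UMC library figures.

```python
# Hypothetical voltage/frequency operating points (V, MHz); illustrative only.
OPERATING_POINTS = [(0.9, 200), (1.1, 400), (1.2, 600), (1.32, 800)]

def dynamic_power(v, f_mhz, c_eff=1e-9, activity=0.5):
    """Approximate dynamic power P = activity * C_eff * V^2 * f, in watts."""
    return activity * c_eff * v ** 2 * f_mhz * 1e6

def select_operating_point(utilization):
    """Pick the slowest point whose relative speed still covers the demanded load."""
    f_max = OPERATING_POINTS[-1][1]
    for v, f in OPERATING_POINTS:
        if f / f_max >= utilization:
            return v, f
    return OPERATING_POINTS[-1]

for util in (0.2, 0.55, 0.95):
    v, f = select_operating_point(util)
    print(f"utilization {util:.2f} -> {v} V @ {f} MHz, "
          f"P ~ {dynamic_power(v, f) * 1e3:.2f} mW")
```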
Abstract:
This thesis introduces the Salmon Algorithm, a search metaheuristic that can be used for a variety of combinatorial optimization problems. The algorithm is loosely based on the path-finding behaviour of salmon swimming upstream to spawn. It has a number of tunable parameters, so experiments were conducted to find the optimal parameter settings for different search spaces. The algorithm was tested on one instance of the Traveling Salesman Problem and found to outperform an Ant Colony Algorithm and a Genetic Algorithm. It was then tested on three coding theory problems: optimal edit codes, optimal Hamming distance codes, and optimal covering codes. The algorithm improved on the best known values for five of the six edit-code test cases. It matched the best known results on four of the seven Hamming codes and on all three covering codes. The results suggest the Salmon Algorithm is competitive with established guided random search techniques and may be superior in some search spaces.
Abstract:
Understanding the machinery of gene regulation in order to control gene expression has been one of the main focuses of bioinformaticians for years. We use a multi-objective genetic algorithm to evolve a specialized version of side effect machines for degenerate motif discovery. We compare several suggested objectives in terms of the motifs they find, test different multi-objective scoring schemes and probabilistic models for the background sequences, and report our results on a synthetic dataset and on several biological benchmarking suites. We conclude with a comparison of our algorithm against some widely used motif discovery algorithms from the literature and suggest future directions for research in this area.
Abstract:
DNA assembly is among the most fundamental and difficult problems in bioinformatics. Near-optimal assembly solutions are available for bacterial and small genomes; however, assembling large and complex genomes, especially the human genome, using Next-Generation Sequencing (NGS) technologies has proven very difficult because of the highly repetitive and complex nature of the human genome, short read lengths, uneven data coverage and tools that are not specifically built for human genomes. Moreover, many algorithms do not even scale to human genome datasets containing hundreds of millions of short reads. The DNA assembly problem is usually divided into several subproblems, including error detection and correction in the DNA data, contig creation, scaffolding and contig orientation, each of which can be seen as a distinct research area. This thesis focuses specifically on creating contigs from the short reads and combining them with the outputs of other tools in order to obtain better results. Three assemblers, SOAPdenovo [Li09], Velvet [ZB08] and Meraculous [CHS+11], are selected for comparison in this thesis. The results show that this thesis's work produces results comparable to the other assemblers, and that combining our contigs with the outputs of other tools produces the best results, outperforming all other investigated assemblers.
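To make the contig-creation step concrete, the sketch below builds a k-mer de Bruijn graph from short reads and walks unambiguous (in-degree 1, out-degree 1) paths to emit contigs. This is a toy version of the approach used by de Bruijn assemblers such as Velvet and SOAPdenovo, with a hypothetical k and toy reads, not the thesis's pipeline.

```python
from collections import defaultdict

def build_de_bruijn(reads, k):
    """Map each (k-1)-mer node to the set of its successor (k-1)-mer nodes."""
    graph = defaultdict(set)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].add(kmer[1:])
    return dict(graph)

def contigs(reads, k=4):
    """Emit contigs by walking unambiguous (in-degree 1, out-degree 1) paths."""
    graph = build_de_bruijn(reads, k)
    indeg = defaultdict(int)
    for succs in graph.values():
        for s in succs:
            indeg[s] += 1

    def unambiguous(node):
        return node in graph and len(graph[node]) == 1 and indeg[node] == 1

    out = []
    for start in graph:
        if unambiguous(start):
            continue                      # contigs begin at sources or branch points
        for nxt in graph[start]:
            contig, node = start + nxt[-1], nxt
            while unambiguous(node):
                node = next(iter(graph[node]))
                contig += node[-1]
            out.append(contig)
    return out

# Toy reads overlapping into one sequence: expect a single contig "ATGGCGTGCAAT".
print(contigs(["ATGGCGT", "GGCGTGC", "GTGCAAT"], k=4))
```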
Abstract:
Ordered gene problems are a very common class of optimization problems. Because of their popularity, countless algorithms have been developed in an attempt to find high-quality solutions to them, and many other types of problems are reduced to ordered gene form so that the well-known heuristics and metaheuristics for this class can be applied. Several ordered gene problems are studied here, namely the travelling salesman problem, the bin packing problem and the graph colouring problem. In addition, two bioinformatics problems not traditionally seen as ordered gene problems are studied: DNA error correction and DNA fragment assembly. These problems are studied with multiple variations and combinations of heuristics and metaheuristics using two distinct types of representations. The majority of the algorithms are built around the Recentering-Restarting Genetic Algorithm. The algorithm variations were successful on all problems studied, and particularly on the two bioinformatics problems. For DNA error correction, multiple cases were found in which 100% of the codes were corrected. The algorithm variations were also able to beat all other state-of-the-art DNA fragment assemblers on 13 of the 16 benchmark problem instances.
Abstract:
Understanding the relationship between genetic diseases and the genes associated with them is an important problem for human health. The vast amount of data created from a large number of high-throughput experiments performed in the last few years has resulted in an unprecedented growth in computational methods to tackle the disease gene association problem. Nowadays, it is clear that a genetic disease is not a consequence of a defect in a single gene. Instead, the disease phenotype is a reflection of various genetic components interacting in a complex network. In fact, genetic diseases, like any other phenotype, occur as a result of various genes working in sync with each other within one or several biological modules. Using a genetic algorithm, our method tries to evolve communities containing the set of potential disease genes likely to be involved in a given genetic disease. Given a set of known disease genes, we first obtain a protein-protein interaction (PPI) network containing all the known disease genes. All the other genes inside the procured PPI network are then considered candidate disease genes, as they lie in the vicinity of the known disease genes in the network. Our method attempts to find communities of potential disease genes that work closely with one another and with the set of known disease genes. As a proof of concept, we tested our approach on 16 breast cancer genes and 15 Parkinson's Disease genes. We obtained results comparable to or better than CIPHER, ENDEAVOUR and GPEC, three of the most reliable and frequently used disease-gene ranking frameworks.
Abstract:
In this thesis we analyze dictionary graphs and some other kinds of graphs using the PageRank algorithm. We calculated the correlation between the degree and the PageRank of all nodes for a graph obtained from the Merriam-Webster dictionary, a French dictionary and the WordNet hypernym and synonym dictionaries. Our conclusion was that PageRank can be a good tool for comparing the quality of dictionaries. We also studied some artificial social and random graphs. We found that when we omitted some random nodes from each of the graphs, we did not notice any significant changes in the ranking of the nodes according to their PageRank. We also discovered that some of the social graphs selected for our study were less resistant to changes in PageRank.
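A hedged sketch of the kind of computation described above: power-iteration PageRank on a small directed graph, followed by the Pearson correlation between each node's degree and its PageRank. The toy graph and damping factor are illustrative, not the thesis's data; all successor nodes are assumed to appear as keys of the graph dictionary.

```python
def pagerank(graph, damping=0.85, iters=100):
    """Power-iteration PageRank for a directed graph given as {node: [successors]}."""
    nodes = list(graph)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1.0 - damping) / n for v in nodes}
        for v in nodes:
            succs = graph[v]
            if succs:
                share = damping * rank[v] / len(succs)
                for w in succs:
                    new[w] += share
            else:                          # dangling node: spread its rank uniformly
                for w in nodes:
                    new[w] += damping * rank[v] / n
        rank = new
    return rank

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
pr = pagerank(graph)
deg = {v: len(graph[v]) + sum(v in s for s in graph.values()) for v in graph}
print(pearson([deg[v] for v in graph], [pr[v] for v in graph]))
```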
Abstract:
This thesis surveys the main halftoning methods, from analog screening to direct binary search by way of ordered dithering, with particular attention to error diffusion. These methods are compared from the modern perspective of structure sensitivity. A new error-diffusion halftoning method is presented and subjected to various evaluations. The proposed method aims to be original and simple, as capable of preserving the structural character of images as the state-of-the-art method, and two to three orders of magnitude faster than the latter. First, the image is decomposed into characteristic local frequencies. Then, the base behaviour of the proposed method is given. Next, a carefully chosen set of parameters makes it possible to modify this behaviour so as to match the different local frequency characters. Finally, a calibration determines the right parameters to associate with each possible frequency. Once the algorithm is assembled, any image can be processed very quickly: each pixel is attached to its own frequency, this frequency serves as an index into the calibration table, the appropriate diffusion parameters are retrieved, and the output colour determined for the pixel contributes, in expectation, to emphasizing the structure it belongs to.
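For context, classic Floyd-Steinberg error diffusion, the family of methods the thesis builds on, can be written in a few lines. This is a generic sketch on a grayscale image, not the structure-aware method proposed in the thesis.

```python
import numpy as np

def floyd_steinberg(gray):
    """Classic Floyd-Steinberg error diffusion on a grayscale image in [0, 1].
    Returns a binary (0/1) halftone; the quantization error of each pixel is
    pushed to unprocessed neighbours with the usual 7/16, 3/16, 5/16, 1/16 weights."""
    img = gray.astype(float).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = new
            err = old - new
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out

# Example: halftone a smooth gradient; the mean of the output tracks the mean intensity.
gradient = np.tile(np.linspace(0, 1, 64), (32, 1))
print(floyd_steinberg(gradient).mean())
```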
Approximation of the posterior distribution of a hierarchical mixed-effects Gamma-Poisson model
Abstract:
The method we present for modelling so-called "count" or Poisson data is based on the procedure named Poisson Regression Interactive Multilevel Modeling (PRIMM) developed by Christiansen and Morris (1997). In the PRIMM method the Poisson regression includes only fixed effects, whereas our model additionally incorporates random effects. As in Christiansen and Morris (1997), the model studied performs inference based on analytical approximations of the posterior distributions of the parameters, thereby avoiding computational methods such as Markov chain Monte Carlo (MCMC). The approximations are based on Laplace's method and on the asymptotic theory associated with the normal approximation of posterior distributions. The Poisson regression parameters are estimated by maximizing their posterior density via the Newton-Raphson algorithm. This study also determines the first two posterior moments of the Poisson parameters, each of whose posterior distributions is approximately a gamma distribution. Applications to two data examples allowed us to verify that this model can, to some extent, be regarded as a generalization of the PRIMM method. Indeed, the model applies both to unstratified and to stratified Poisson data; in the latter case it includes not only fixed effects but also random effects associated with the strata. Finally, the model is applied to data on several types of adverse events observed among the participants of a clinical trial involving a quadrivalent vaccine against measles, mumps, rubella and varicella. The Poisson regression includes the fixed effect corresponding to the treatment/control variable, as well as random effects associated with the biological systems of the human body to which the adverse events considered are attributed.
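As a hedged illustration of the estimation step described above (posterior-mode estimation via Newton-Raphson, in the spirit of a Laplace approximation), the sketch below maximizes the log posterior of a plain Poisson regression with independent normal priors on the coefficients. It is not the thesis's full hierarchical Gamma-Poisson mixed-effects model; the prior variance, data and treatment indicator are hypothetical.

```python
import numpy as np

def poisson_posterior_mode(X, y, prior_var=10.0, iters=25):
    """Newton-Raphson for the mode of a Poisson-regression posterior with
    independent N(0, prior_var) priors on the coefficients.
    Log posterior (up to a constant):
        l(beta) = sum_i [ y_i x_i'beta - exp(x_i'beta) ] - beta'beta / (2 * prior_var)
    Returns the mode and the Laplace covariance (inverse of the negative Hessian)."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(iters):
        mu = np.exp(X @ beta)
        grad = X.T @ (y - mu) - beta / prior_var
        hess = -X.T @ (X * mu[:, None]) - np.eye(p) / prior_var   # negative definite
        beta = beta - np.linalg.solve(hess, grad)                  # Newton update
    cov = np.linalg.inv(-hess)
    return beta, cov

# Tiny synthetic example with a hypothetical treatment indicator.
rng = np.random.default_rng(0)
x_treat = rng.integers(0, 2, size=200)
X = np.column_stack([np.ones(200), x_treat])
y = rng.poisson(np.exp(0.5 + 0.8 * x_treat))
beta_hat, cov_hat = poisson_posterior_mode(X, y)
print(beta_hat, np.sqrt(np.diag(cov_hat)))
```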
Abstract:
Previous research on emotions in organizational settings, notably around the notions of emotional labour, psychological contract and equity, has often raised the question of the rationality and appropriateness of emotional displays, as well as of the mechanisms used to control and moderate them. However, little empirical research has been done on how employees themselves make sense of their emotions at work and on the process by which they manage to make these emotions understandable and legitimate, both for themselves and for others. In recent years, however, an emerging stream of research has tended to set aside the normative/rationalist perspective in order to raise this type of question. Thus, instead of being considered strictly subjective, private, even inaccessible experiences, emotions are approached through the discourses and narratives of which they are the object. Emotions thus appear not only expressed in language and communication, but constructed and negotiated through them. The present research develops this emerging perspective empirically, notably by drawing on theories of sensemaking and narration, through the detailed analysis of the accounts of four sales-support employees of a computer products reseller. By asking my subjects to talk about their emotional experiences and analysing their answers with a narrative analysis methodology, this research explores how employees manage to construct the meaning and legitimacy of their emotional experiences. The results suggest, among other things, that these sensemaking processes are closely tied to issues of identity and role.
Abstract:
Kin selection theorists argue that evolution in social contexts will lead organisms to behave as if maximizing their inclusive, as opposed to personal, fitness. The inclusive fitness concept allows biologists to treat organisms as akin to rational agents seeking to maximize a utility function. Here we develop this idea and place it on a firm footing by employing a standard decision-theoretic methodology. We show how the principle of inclusive fitness maximization and a related principle of quasi-inclusive fitness maximization can be derived from axioms on an individual’s ‘as if preferences’ (binary choices). Our results help integrate evolutionary theory and rational choice theory, help draw out the behavioural implications of inclusive fitness maximization, and point to a possible way in which evolution could lead organisms to implement it.