858 results for optimization-based similarity reasoning
Abstract:
Brazilian wine production is characterized by Vitis labrusca grape varieties, especially the economically important Isabel cultivar, over 80% of whose production is destined for table wine. The objective of this study was to optimize and validate the conditions for extracting volatile compounds from wine by solid-phase microextraction, using the response surface method. The response surface analysis indicates that the central-point values maximize the extraction of volatile compounds from wine: an equilibrium time of 15 minutes, an extraction time of 35 minutes, and an extraction temperature of 30 °C. Esters were the most numerous compounds found under these extraction conditions, indicating that wines made from Isabel grapes are characterized by compounds that confer a fruity aroma, a finding that corroborates the scientific literature.
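As a rough illustration of the response surface method used above, the sketch below fits a second-order model to synthetic extraction data and checks that the optimum sits at the central point. The factor coding and the data are invented for illustration, not taken from the study.

```python
# Minimal response-surface sketch: fit a second-order model
#   y = b0 + sum(bi*xi) + sum(bii*xi^2) + sum(bij*xi*xj)
# to synthetic extraction data and inspect the fitted coefficients.
import numpy as np

rng = np.random.default_rng(0)

# Coded factor levels for equilibrium time, extraction time, temperature
X = rng.uniform(-1, 1, size=(20, 3))

# Synthetic response peaking at the centre point (0, 0, 0)
y = 10 - (X ** 2).sum(axis=1) + rng.normal(0, 0.05, size=20)

# Design matrix with intercept, linear, quadratic, and interaction terms
x1, x2, x3 = X.T
D = np.column_stack([np.ones(len(X)), x1, x2, x3,
                     x1**2, x2**2, x3**2,
                     x1*x2, x1*x3, x2*x3])

coef, *_ = np.linalg.lstsq(D, y, rcond=None)
print("fitted coefficients:", np.round(coef, 2))
# Near-zero linear terms with negative quadratic terms indicate a
# maximum at the centre point, mirroring the study's conclusion.
```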
Abstract:
In the case company's industry, transportation and warehousing costs account for more than 10% of total costs, which is above the average. The Finnish case company believes that by sending larger shipments as parcels it could save tens of thousands of euros annually in freight costs on Finland's domestic shipments. To achieve these savings and optimize total logistics cost, the company wants to determine the cost-efficient way of shipping road shipments of given volumes, in parcel boxes or on pallets, and the split volume that should determine the shipment type. Distribution center (DC) costs affect this decision, so they must also be evaluated to determine the total logistics cost savings. The main results were obtained through activity-based costing calculations covering DC and road freight costs to determine the ideal split volume at which the total logistics cost is optimal. Calculations were done for Finland's DC, separately for the two main road freight destinations, Finland and Sweden, which cover 50% of road shipment spend. Data for the calculations was collected both manually and automatically from various internal and external sources, such as the company's ERP system and logistics service providers' (LSP) reporting. DC processes were studied in practice and compared to model processes. Currently used freight rates were compared to existing pricing models, and the freight service tendering process was evaluated by participating in it and comparing it to models from the literature. The results show that the potential savings are not as significant as the company hoped, mainly because packing work increases DC labor cost. Annual savings from setting the ideal split volume per country would amount to 0.4% of the warehousing and transportation costs of the shipments in scope of this thesis. The split volume should be set separately for each route, mainly because the road freight pricing model differs by country: on some routes larger parcels should be sent, while on others pallets should be used more. The next step is to repeat these calculations for the remaining routes to determine the total savings potential. Other findings show that the DC processes are well designed and that the company could achieve savings by executing tenders more efficiently. The company should also pay more attention to parcel pricing and pack shipments accordingly.
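A sketch of the split-volume idea behind the activity-based costing analysis: compare total cost per shipment (freight plus DC handling) for parcels versus pallets across shipment volumes. All rates and box/pallet capacities below are hypothetical placeholders, not the company's figures.

```python
# Compare hypothetical parcel vs. pallet total cost per shipment and
# locate the split volume where pallets become the cheaper option.
import math

def parcel_cost(volume_m3: float) -> float:
    boxes = math.ceil(volume_m3 / 0.05)    # assumed 0.05 m3 per box
    return boxes * 6.0 + boxes * 1.5       # per-box freight + packing labor

def pallet_cost(volume_m3: float) -> float:
    pallets = math.ceil(volume_m3 / 0.9)   # assumed 0.9 m3 per pallet
    return pallets * 45.0 + pallets * 4.0  # per-pallet freight + handling

for v in (0.1, 0.2, 0.3, 0.4, 0.5):
    cheaper = "parcel" if parcel_cost(v) <= pallet_cost(v) else "pallet"
    print(f"{v:.1f} m3 -> {cheaper}")
```

With these placeholder rates the crossover lies between 0.3 and 0.4 m3; the thesis's point is that packing labor shifts this crossover and must be priced in per route.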
Abstract:
Liberalization of electricity markets has resulted in a competitive Nordic electricity market, in which electricity retailers play a key role as electricity suppliers, market intermediaries, and service providers. Although these roles may remain unchanged in the near future, the retailers' operation may change fundamentally as a result of the emerging smart grid environment. In particular, the increasing amount of distributed energy resources (DER) and the improving opportunities for their control are reshaping the operating environment of the retailers. This requires that the retailers' operation models be developed to match an operating environment in which the active use of DER plays a major role. Electricity retailers have a clientele and operate actively in the electricity markets, which makes them a natural market party to offer end-users new services aimed at efficient, market-based use of DER. From the retailer's point of view, the active use of DER can provide means to adapt the operation to meet the challenges posed by the smart grid environment, and to pursue the retailer's ultimate objective of maximizing the profit of operation. This doctoral dissertation introduces a methodology for the comprehensive use of DER in an electricity retailer's short-term profit optimization, covering operation in a variety of marketplaces including day-ahead, intra-day, and reserve markets. The analysis results provide data on the key profit-making opportunities and the risks associated with different types of DER use. The methodology may therefore serve as an efficient tool for an experienced operator in planning optimal market-based DER use. The key contributions of this doctoral dissertation lie in the analysis and development of a model that allows the retailer to benefit from the profit-making opportunities brought by the use of DER in different marketplaces, while also managing the major risks involved in the active use of DER. In addition, the dissertation introduces an analysis of the economic potential of DER control actions in different marketplaces, including the day-ahead Elspot market, the balancing power market, and the hourly market of Frequency Containment Reserve for Disturbances (FCR-D).
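A toy sketch of the kind of allocation decision involved: split a controllable DER capacity between the day-ahead market and a reserve market for a single hour. Prices and capacity are hypothetical; the dissertation's actual model covers multiple marketplaces, hours, and risk terms.

```python
# Allocate controllable load-reduction capacity (MW) between two
# marketplaces to maximize one hour's profit, as a linear program.
from scipy.optimize import linprog

p_day_ahead = 60.0   # EUR/MWh earned by selling reduction day-ahead
p_reserve = 20.0     # EUR/MW capacity payment in the reserve market
capacity = 5.0       # MW of controllable DER

# linprog minimizes, so negate revenues to maximize profit.
c = [-p_day_ahead, -p_reserve]
A_ub = [[1.0, 1.0]]          # the same MW cannot be sold twice
b_ub = [capacity]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print("day-ahead MW:", res.x[0], "reserve MW:", res.x[1],
      "profit EUR:", -res.fun)
```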
Abstract:
Over time the demand for quantitative portfolio management has increased among financial institutions, but there is still a lack of practical tools. In 2008 the EDHEC Risk and Asset Management Research Centre conducted a survey of European investment practices. It revealed that the majority of asset or fund management companies, pension funds, and institutional investors do not use more sophisticated models to compensate for the flaws of Markowitz mean-variance portfolio optimization. Furthermore, tactical asset allocation managers employ a variety of methods to estimate the return and risk of assets, but also need sophisticated portfolio management models to outperform their benchmarks. Recent developments in portfolio management suggest that new innovations are slowly gaining ground but still need to be studied carefully. This thesis provides a practical tactical asset allocation (TAA) application of the Black–Litterman (B–L) approach and an unbiased evaluation of the B–L model's qualities. The mean-variance framework, issues related to asset allocation decisions, and return forecasting are examined carefully to uncover issues affecting active portfolio management. European fixed income data is employed in an empirical study of whether a B–L model-based TAA portfolio is able to outperform its strategic benchmark. The tactical asset allocation uses a Vector Autoregressive (VAR) model to create return forecasts from lagged values of asset classes as well as economic variables. The sample data (31.12.1999–31.12.2012) is divided into two parts: in-sample data is used for calibrating a strategic portfolio, and the out-of-sample period is used for testing the tactical portfolio against the strategic benchmark. Results show that the B–L model-based tactical asset allocation outperforms the benchmark portfolio in terms of risk-adjusted return and mean excess return. The VAR model is able to pick up changes in investor sentiment, and the B–L model adjusts portfolio weights in a controlled manner. The TAA portfolio shows promise especially in moderately shifting allocation toward riskier assets while the market is turning bullish, without overweighting high-beta investments. Based on the findings of this thesis, the Black–Litterman model offers a good platform for active asset managers to quantify their views on investments and implement their strategies. The B–L model shows potential and offers interesting research avenues. However, the success of tactical asset allocation remains highly dependent on the quality of the input estimates.
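The B–L blending at the core of such a TAA setup can be stated compactly. The sketch below applies the standard Black–Litterman posterior formula to a two-asset toy example; all numbers are invented for illustration (the thesis's actual data is European fixed income).

```python
# Standard Black-Litterman posterior: blend equilibrium returns (pi)
# with an investor view (P, q, Omega) under prior uncertainty tau.
import numpy as np

Sigma = np.array([[0.04, 0.01],
                  [0.01, 0.09]])        # asset return covariance
pi = np.array([0.03, 0.05])             # equilibrium (implied) returns
tau = 0.05                              # uncertainty in the prior

P = np.array([[1.0, -1.0]])             # view: asset 1 outperforms asset 2
q = np.array([0.02])                    # ... by 2%
Omega = np.array([[0.001]])             # confidence in the view

A = np.linalg.inv(tau * Sigma)
B = P.T @ np.linalg.inv(Omega) @ P
posterior = np.linalg.inv(A + B) @ (A @ pi + P.T @ np.linalg.inv(Omega) @ q)
print("posterior expected returns:", posterior.round(4))
```

In a TAA loop, the VAR forecasts would feed the views (P, q), and the posterior returns would drive the controlled weight adjustments the abstract describes.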
Abstract:
The Two-Connected Network with Bounded Ring (2CNBR) problem is a network design problem addressing the connection of servers to create a survivable network with limited redirections in the event of failures. Particle Swarm Optimization (PSO) is a stochastic population-based optimization technique modeled on the social behaviour of flocking birds or schooling fish. This thesis applies PSO to the 2CNBR problem. As PSO was originally designed to handle a continuous solution space, the algorithm had to be modified to adapt it to such a highly constrained discrete combinatorial optimization problem. Presented are an indirect transcription scheme for applying PSO to such discrete optimization problems and an oscillating mechanism for averting stagnation.
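The abstract does not detail the indirect transcription scheme itself. One common way to let a continuous optimizer like PSO search a discrete space is a random-key encoding, sketched below as a generic illustration; this is an assumption, not necessarily the thesis's scheme.

```python
# Random-key decoding: each particle is a real vector, decoded into a
# permutation by ranking its components. PSO then updates the real
# vector as usual while fitness is evaluated on the decoded structure.
import numpy as np

def decode(position: np.ndarray) -> list[int]:
    # Rank the continuous components; ties are broken by index.
    return np.argsort(position).tolist()

particle = np.array([0.7, 0.1, 0.9, 0.4])
print(decode(particle))   # -> [1, 3, 0, 2]: a visiting order of 4 nodes
```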
Abstract:
(A) Solid-phase synthesis of oligonucleotides is well documented and extensively studied, as demand continues to rise with the development of antisense, anti-gene, RNA interference, and aptamer technologies. Although synthesis of RNA sequences faces many challenges, most notably the choice of the 2'-hydroxyl protecting group, modified 2'-O-Cpep-protected ribonucleotides were synthesized as alternative building blocks. Altering phosphitylation procedures to incorporate 3'-N,N-diethyl phosphoramidites enhanced the overall reactivity and thus increased the coupling efficiency without loss of integrity. Furthermore, technical optimizations of the solid-phase synthesis cycles were carried out, allowing successful synthesis of a homo-U10 sequence with a stepwise coupling efficiency reaching 99% and a final yield of 91%. (B) Over the past few decades, dipyrrometheneboron difluoride (BODIPY) has gained recognition as one of the most versatile fluorophores. Currently, BODIPY labeling of oligonucleotides is carried out post-synthetically, and to date there is no method for direct incorporation of BODIPY into oligonucleotides during solid-phase synthesis. Synthesis of BODIPY-derived phosphoramidites therefore provides an alternative route to fluorescently labeled oligonucleotides. A method for the synthesis and incorporation of the BODIPY analogues into oligonucleotides by phosphoramidite chemistry-based solid-phase DNA synthesis is reported here. Using this approach, a BODIPY-labeled T10 homopolymer and ISIS 5132 were successfully synthesized.
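The reported figures are internally consistent: assuming the first residue is pre-loaded on the solid support, a 10-mer requires nine coupling cycles, so a 99% stepwise efficiency gives an overall yield of

$$ Y = \varepsilon^{\,n} = 0.99^{9} \approx 0.914 \approx 91\%. $$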
Abstract:
This thesis introduces the Salmon Algorithm, a search metaheuristic applicable to a variety of combinatorial optimization problems. The algorithm is loosely based on the path-finding behaviour of salmon swimming upstream to spawn. It has a number of tunable parameters, so experiments were conducted to find the optimal parameter settings for different search spaces. The algorithm was tested on one instance of the Traveling Salesman Problem and found to outperform an Ant Colony Algorithm and a Genetic Algorithm. It was then tested on three coding theory problems: optimal edit codes, optimal Hamming distance codes, and optimal covering codes. The algorithm improved on the best known values for five of the six edit-code test cases, matched the best known results on four of the seven Hamming codes, and matched all three covering codes. The results suggest the Salmon Algorithm is competitive with established guided random search techniques, and may be superior in some search spaces.
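The fitness evaluations in such coding-theory searches typically reduce to checking a candidate code's minimum pairwise distance. Below is a generic sketch for the Hamming-distance case; it illustrates the evaluation only and is not the thesis's implementation.

```python
# Minimum pairwise Hamming distance of a candidate code: a set of
# equal-length words. A search heuristic would try to grow the code
# while keeping this value above a target distance.
from itertools import combinations

def hamming(a: str, b: str) -> int:
    return sum(x != y for x, y in zip(a, b))

def min_distance(code: list[str]) -> int:
    return min(hamming(a, b) for a, b in combinations(code, 2))

print(min_distance(["00000", "11100", "00111", "11011"]))  # -> 3
```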
Abstract:
Formal verification of software can be an enormous task. This fact has led some software engineers to claim that formal verification is not feasible in practice. One possible way of supporting the verification process is a programming language that provides powerful abstraction mechanisms combined with intensive reuse of code. In this thesis we present a strongly typed, functional, object-oriented programming language. This language features type operators of arbitrary kind corresponding to so-called type protocols. Subclassing and inheritance are based on higher-order matching, i.e., they utilize type protocols as the basic tool for code reuse. We formally define the operational and axiomatic semantics of this language. The latter is the basis of the interactive proof assistant VOOP (Verified Object-Oriented Programs), which allows the user to prove equational properties of programs interactively.
Abstract:
Basic relationships between certain regions of space are formulated in natural language in everyday situations. For example, a customer specifies the outline of a future home to the architect by indicating which rooms should be close to each other. Qualitative spatial reasoning, as an area of artificial intelligence, tries to develop a theory of space based on similar notions. In formal ontology and in ontological computer science, mereotopology is a first-order theory, embodying mereological and topological concepts, of the relations among wholes, parts, parts of parts, and the boundaries between parts. We introduce abstract relation algebras and present their structural properties as well as their connection to algebras of binary relations. This is followed by details of the expressiveness of algebras of relations for region-based models. Mereotopology has been the main basis for most region-based theories of space. Since its earliest inception, many theories have been proposed for mereotopology in artificial intelligence, among which the Region Connection Calculus is the most prominent. The expressiveness of the Region Connection Calculus in relational logic is far greater than its original eight base relations might suggest. In this thesis we formulate ways to automatically generate representable relation algebras from spatial data based on the Region Connection Calculus. The generation of new algebras is a two-pronged approach involving splitting existing relations to form new algebras and refining the newly generated algebras. We present an implementation of a system automating these steps and provide an effective and convenient interface for defining new spatial relations and generating representable relation algebras.
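To make the eight base relations concrete, the sketch below classifies pairs of closed intervals on the line, a toy stand-in for regions; this is a deliberate simplification of the Region Connection Calculus, not the thesis's relation-algebra machinery.

```python
# RCC8-style classification of closed intervals [a1, a2] and [b1, b2].
# DC/EC/PO/TPP/NTPP (and inverses) plus EQ cover all configurations.

def rcc8(a: tuple[float, float], b: tuple[float, float]) -> str:
    (a1, a2), (b1, b2) = a, b
    if (a1, a2) == (b1, b2):
        return "EQ"      # identical regions
    if a2 < b1 or b2 < a1:
        return "DC"      # disconnected
    if a2 == b1 or b2 == a1:
        return "EC"      # externally connected (touching boundaries)
    if b1 < a1 and a2 < b2:
        return "NTPP"    # a lies in the interior of b
    if a1 < b1 and b2 < a2:
        return "NTPPi"
    if b1 <= a1 and a2 <= b2:
        return "TPP"     # proper part sharing a boundary point
    if a1 <= b1 and b2 <= a2:
        return "TPPi"
    return "PO"          # partial overlap

print(rcc8((0, 2), (1, 3)))   # -> PO
print(rcc8((1, 2), (1, 3)))   # -> TPP
```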
Abstract:
A complex network is an abstract representation of an intricate system of interrelated elements where the patterns of connection hold significant meaning. One particular complex network is a social network whereby the vertices represent people and edges denote their daily interactions. Understanding social network dynamics can be vital to the mitigation of disease spread as these networks model the interactions, and thus avenues of spread, between individuals. To better understand complex networks, algorithms which generate graphs exhibiting observed properties of real-world networks, known as graph models, are often constructed. While various efforts to aid with the construction of graph models have been proposed using statistical and probabilistic methods, genetic programming (GP) has only recently been considered. However, determining that a graph model of a complex network accurately describes the target network(s) is not a trivial task as the graph models are often stochastic in nature and the notion of similarity is dependent upon the expected behavior of the network. This thesis examines a number of well-known network properties to determine which measures best allowed networks generated by different graph models, and thus the models themselves, to be distinguished. A proposed meta-analysis procedure was used to demonstrate how these network measures interact when used together as classifiers to determine network, and thus model, (dis)similarity. The analytical results form the basis of the fitness evaluation for a GP system used to automatically construct graph models for complex networks. The GP-based automatic inference system was used to reproduce existing, well-known graph models as well as a real-world network. Results indicated that the automatically inferred models exemplified functional similarity when compared to their respective target networks. This approach also showed promise when used to infer a model for a mammalian brain network.
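A minimal sketch of the measure-based comparison idea, using networkx and two classic models; the specific measures and models are illustrative stand-ins, not the thesis's full analysis.

```python
# Compute a few standard network properties for graphs drawn from two
# different models and see which measures separate them.
import networkx as nx

g_er = nx.gnp_random_graph(200, 0.05, seed=1)     # Erdos-Renyi model
g_ba = nx.barabasi_albert_graph(200, 5, seed=1)   # preferential attachment

for name, g in [("ER", g_er), ("BA", g_ba)]:
    print(name,
          "avg clustering:", round(nx.average_clustering(g), 3),
          "max degree:", max(d for _, d in g.degree()))
# A scale-free BA graph typically shows a much larger maximum degree,
# so degree-based measures distinguish these two models well.
```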
Characterizing Dynamic Optimization Benchmarks for the Comparison of Multi-Modal Tracking Algorithms
Abstract:
Population-based metaheuristics, such as particle swarm optimization (PSO), have been employed to solve many real-world optimization problems. Although it is often sufficient to find a single solution to these problems, there exist cases where identifying multiple, diverse solutions can be beneficial or even required. Some of these problems are further complicated by a change in their objective function over time. This type of optimization is referred to as dynamic, multi-modal optimization. Algorithms that exploit multiple optima in a search space are known as niching algorithms. Although numerous dynamic niching algorithms have been developed, their performance is often measured solely on their ability to find a single global optimum. Furthermore, the comparisons often use synthetic benchmarks whose landscape characteristics are generally limited and unknown. This thesis provides a landscape analysis of the dynamic benchmark functions commonly developed for multi-modal optimization. The benchmark analysis reveals that the mechanisms responsible for dynamism in the current dynamic benchmarks do not significantly affect landscape features, suggesting a lack of representation of problems whose landscape features vary over time. This analysis is used in a comparison of current niching algorithms to identify the effects that specific landscape features have on niching performance. Two performance metrics are proposed to measure both the scalability and the accuracy of the niching algorithms. The algorithm comparison demonstrates which algorithms are best suited to a variety of dynamic environments. The comparison also examines each algorithm's niching behaviour and analyzes the range of, and trade-off between, scalability and accuracy when tuning each algorithm's parameters. These results contribute to the understanding of current niching techniques as well as the problem features that ultimately dictate their success.
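A moving-peaks-style toy benchmark illustrates what "dynamic, multi-modal" means here: several optima whose positions shift over time. The parameters are invented for illustration; the thesis analyzes established benchmarks rather than this toy.

```python
# A handful of cone peaks in 2-D whose locations shift at each
# environment change; fitness at a point is the best (highest) cone.
import numpy as np

rng = np.random.default_rng(42)
peaks = rng.uniform(0, 100, size=(5, 2))     # 5 peak centres in 2-D
heights = rng.uniform(30, 70, size=5)

def evaluate(x: np.ndarray) -> float:
    return max(h - np.linalg.norm(x - c) for c, h in zip(peaks, heights))

def change_environment(severity: float = 1.0) -> None:
    # Shift every peak by a random vector of the given severity.
    global peaks
    peaks = peaks + rng.normal(0, severity, size=peaks.shape)

print(evaluate(np.array([50.0, 50.0])))
change_environment()
print(evaluate(np.array([50.0, 50.0])))   # the landscape has moved
```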
Abstract:
This paper analyzes the measurement of the diversity of sets based on the dissimilarity of the objects contained in the set. We discuss axiomatic approaches to diversity measurement and examine the considerations underlying the application of specific measures. Our focus is on descriptive issues: rather than assuming a specific ethical position or restricting attention to properties that are appealing in specific applications, we address the foundations of the measurement issue as such in the context of diversity.
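One concrete instance of the family under discussion is the aggregate of pairwise dissimilarities, sketched below; the paper's axiomatic analysis concerns such measures in general, not this particular choice.

```python
# Diversity of a set as the sum of pairwise dissimilarities of its
# objects; any symmetric dissimilarity function can be plugged in.
from itertools import combinations

def diversity(items, dissimilarity) -> float:
    return sum(dissimilarity(a, b) for a, b in combinations(items, 2))

# Toy objects as points on a line, dissimilarity as distance.
print(diversity([0.0, 1.0, 5.0], lambda a, b: abs(a - b)))  # -> 10.0
```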
Abstract:
Network survivability is a very interesting area of technical study as well as a critical concern in network design. Given that more and more data is carried over communication networks, a single failure can interrupt millions of users and cause millions of dollars in lost revenue. Network protection techniques provide spare capacity in a network and automatically reroute flows around a failure using that available capacity. This thesis addresses the design of survivable optical networks using protection schemes based on p-cycles. More precisely, path-protecting p-cycles are exploited in the context of link failures. Our study focuses on the placement of p-cycle protection structures, assuming that the working paths for the set of requests are defined a priori. Most existing work uses heuristics or solution methods that have difficulty solving large instances. The objective of this thesis is twofold. On the one hand, we propose models and solution methods capable of tackling larger problem instances than those already presented in the literature. On the other hand, thanks to the new algorithms, we are able to produce optimal or near-optimal solutions. To do so, we rely on column generation, a technique well suited to solving large-scale linear programs; in this project, column generation is used as an intelligent way to implicitly enumerate promising cycles. We first propose formulations for the master and pricing problems, as well as a first column generation algorithm, for the design of networks protected by path-protecting p-cycles. The algorithm obtains better solutions, in reasonable time, than existing methods. Next, a more compact formulation is proposed for the pricing problem, and we present a new hierarchical decomposition method that greatly improves the overall efficiency of the algorithm. Regarding integer solutions, we propose two heuristic methods that find good solutions. We also carry out a systematic comparison between p-cycles and classical shared protection schemes, using unified column generation-based formulations to obtain high-quality results. We then empirically evaluate the directed and undirected versions of p-cycles for both link protection and path protection under asymmetric traffic scenarios, and show the additional protection cost incurred when bidirectional systems are employed in such scenarios. Finally, we study a column generation formulation for the design of p-cycle networks under availability requirements and obtain the first lower bounds for this problem.
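Since the abstract does not give the p-cycle master or pricing formulations, the column generation loop itself can be illustrated on a standard textbook stand-in, the cutting-stock problem: solve a restricted master LP, read off the duals, and let a pricing problem (here a small knapsack, enumerated by brute force) propose improving columns. The numbers are illustrative only.

```python
# Schematic column generation: restricted master LP + pricing loop.
import itertools
import numpy as np
from scipy.optimize import linprog

roll = 10                      # stock roll length
sizes = [3, 4, 5]              # item sizes
demand = np.array([30, 20, 10])

# Start with one trivial cutting pattern per item size.
columns = [np.array([roll // s if j == i else 0 for j in range(3)])
           for i, s in enumerate(sizes)]

while True:
    A = np.column_stack(columns)
    res = linprog(np.ones(A.shape[1]), A_ub=-A, b_ub=-demand,
                  bounds=(0, None), method="highs")
    duals = -res.ineqlin.marginals        # value of covering one item

    # Pricing: find a feasible pattern whose dual value exceeds its
    # unit cost of 1 (i.e., negative reduced cost).
    best_val, best_pat = 0.0, None
    for counts in itertools.product(range(roll // min(sizes) + 1), repeat=3):
        if sum(c * s for c, s in zip(counts, sizes)) <= roll:
            val = float(np.dot(duals, counts))
            if val > best_val:
                best_val, best_pat = val, np.array(counts)
    if best_val <= 1.0 + 1e-9:
        break                             # no improving column: LP optimal
    columns.append(best_pat)

print("rolls needed (LP bound):", round(res.fun, 2))
```

In the thesis's setting, the pricing problem would instead search for a promising cycle, but the master/pricing alternation is the same.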
Abstract:
This thesis describes the development of a new irreversible ring-expansion methodology that converts N-alkyl-3,4-dehydroprolinols into 3-substituted N-alkyl tetrahydropyridines via a bicyclic aziridinium intermediate. The method allows the introduction of a wide range of substituents at the 3-position and tolerates groups at the 2- and 6-positions, giving access to mono-, di-, or trisubstituted piperidines with excellent diastereocontrol. Moreover, the stereogenic information of the starting 3,4-dehydroprolinol is shown to be fully transferred to the tetrahydropyridine product. In addition, a methodology was developed for preparing the 3,4-dehydroprolinol starting materials in enantiopure form, with or without substituents at the 2- and 5-positions, with very good stereocontrol. The first chapter summarizes the relevant literature, including a brief survey of existing methods for the synthesis of 3-substituted piperidines and an overview of aziridinium chemistry; the original hypothesis and the rationale for undertaking the project are also included. The second chapter covers the synthesis of the N-alkyl-3,4-dehydroprolinols used as starting materials for the ring expansion to 3-substituted tetrahydropyridines, including two different synthetic routes: the first uses L-trans-4-hydroxyproline as the starting material, while the second is based on a modified Petasis-Mannich reaction followed by ring-closing metathesis, facilitating access to the ring-expansion precursors. The third chapter presents a proof of concept of the project's viability and the optimization of the reaction conditions for the ring expansion; it also demonstrates that the stereogenic information of the starting materials is transferred to the products. In the fourth chapter, the scope of compounds accessible by this methodology is presented, along with a mechanistic hypothesis explaining the observed relative stereochemistries. An efficient and divergent enantioselective synthesis of 2,3-disubstituted tetrahydropyridines is also documented, in which both substituents were introduced from a common intermediate in three steps.
Abstract:
Thesis digitized by the Division de la gestion de documents et des archives, Université de Montréal.