988 results for DISTRIBUTED OPTIMIZATION
Abstract:
Pumping is estimated to offer considerable potential for energy savings, both technically and economically. Globally, pumping accounts for nearly 22% of the energy demand of electric motors, and in certain industries more than 50% of the electricity used by motors can be consumed by pumping. In wastewater pumping, operation is typically based on on-off control, so that whenever a pump is on it runs at full power. In many cases the pumps are also oversized. Together these factors lead to increased energy consumption. The theoretical part of the thesis introduces the basics of wastewater management and treatment, as well as the main components of a pumping system: the pump, the piping, the motor, and the frequency converter. The empirical part presents a calculation tool developed during the work for estimating the energy-saving potential of wastewater pumping systems. The tool makes it possible to calculate the energy-saving potential when pump output is controlled by rotational speed adjustment with a frequency converter instead of on-off control, and it reports the optimal pump rotational speed and the specific energy consumption. Using the tool, three municipal wastewater pumping stations were studied, and laboratory tests were also carried out to simulate the tool and assess the energy-saving potential. The studies show that wastewater pumping offers considerable opportunities for saving energy by reducing the pump's rotational speed. When the geodetic head is low, energy savings of up to 50% are possible, and over the long term the savings can be significant. The results also confirm the need to optimize the operation of wastewater pumping systems.
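To make the idea concrete, here is a minimal sketch of the kind of calculation such a tool performs. The quadratic pump and system curves, their coefficients, and the constant efficiency are illustrative assumptions, not the thesis's calculator:

```python
# Sketch only: quadratic pump/system curves with made-up coefficients.
import math

RHO_G = 1000 * 9.81                        # water: density * gravity [N/m^3]

def operating_point(r, H0=20.0, a=400.0, H_static=4.0, k=600.0):
    """Intersect the affinity-scaled pump curve H = H0*r^2 - a*Q^2 with the
    system curve H = H_static + k*Q^2 (r = relative rotational speed)."""
    top = H0 * r**2 - H_static
    if top <= 0:
        return None                        # too slow to overcome the static head
    Q = math.sqrt(top / (a + k))           # flow [m^3/s]
    return Q, H_static + k * Q**2          # head [m]

def specific_energy(r, eta=0.6):
    """Energy per pumped volume [kWh/m^3]; constant efficiency is a
    simplification (a real tool would use the pump's efficiency curve)."""
    op = operating_point(r)
    if op is None:
        return float("inf")
    _, H = op
    return RHO_G * H / eta / 3.6e6         # J/m^3 -> kWh/m^3

best = min((r / 100 for r in range(50, 101)), key=specific_energy)
print(f"optimal relative speed {best:.2f}: {specific_energy(best):.4f} kWh/m^3 "
      f"vs {specific_energy(1.0):.4f} kWh/m^3 at full speed")
```

With a low static (geodetic) head, the specific energy drops sharply as speed is reduced, which is exactly the effect the abstract reports.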
Abstract:
In this thesis, the cleaning of ceramic filter media was studied. Mechanisms of fouling and of the dissolution of iron compounds, as well as methods for cleaning ceramic membranes fouled by iron deposits, were studied in the literature part. Cleaning agents and different methods were examined more closely in the experimental part of the thesis. Pyrite is found in the geologic strata. It is oxidized to form ferrous ions, Fe(II), and ferric ions, Fe(III); Fe(III) is further hydrolyzed to form ferric hydroxide. Hematite and goethite, for instance, are naturally occurring iron oxides and hydroxides. In contact with filter media, they can cause severe fouling that common cleaning techniques are not able to remove. Mechanisms for the dissolution of iron oxides include the ligand-promoted pathway and the proton-promoted pathway. The dissolution can also be reductive or non-reductive. The most efficient mechanism is the ligand-promoted reductive mechanism, which comprises two stages: the induction period and the autocatalytic dissolution. Reducing agents (such as hydroquinone and hydroxylamine hydrochloride), chelating agents (such as EDTA) and organic acids are used for the removal of iron compounds. Oxalic acid is the most effective known cleaning agent for iron deposits. Since formulations are often more effective than organic acids, reducing agents or chelating agents alone, the citrate-bicarbonate-dithionite system, among others, is well studied in the literature. Cleaning is also enhanced with ultrasound and backpulsing. In the experimental part, oxalic acid and nitric acid were studied alone and in combination. Citric acid and ascorbic acid, among other chemicals, were also tested. Soaking experiments, experiments with ultrasound, and experiments on alternative methods of applying the cleaning solution to the filter samples were carried out. Permeability and ISO brightness measurements were performed to examine the influence of the cleaning methods on the samples. Inductively coupled plasma optical emission spectroscopy (ICP-OES) analysis of the solutions was carried out to determine the dissolved metals.
Abstract:
The present work aimed at maximizing the number of plantlets obtained by the micropropagation of pineapple (Ananas comosus (L.) Merrill) cv. Pérola. Changes in benzylaminopurine (BAP) concentration, the type of medium (liquid or solidified), and the type of explant in the proliferation phase were evaluated. Slips were used as the explant source, which consisted of axillary buds obtained after careful excision of the leaves. Sterilization was done in the hood with ethanol (70%) for three minutes, followed by calcium hypochlorite (2%) for fifteen minutes, and three washes in sterile water. The explants were introduced in MS medium supplemented with 2 mg L-1 BAP and maintained in a growth room at a 16 h photoperiod (40 µmol m-2 s-1), 27 ± 2 °C. After eight weeks, cultures were subcultured for multiplication in MS medium. The following treatments were tested: liquid vs. solidified medium with different BAP concentrations (0.0, 1.5 or 3.0 mg L-1), with or without a longitudinal cut of the shoot bud used as explant. Liquid medium supplemented with BAP at 1.5 mg L-1, associated with the longitudinal sectioning of the shoot bud used as explant, gave the best results, maximizing shoot proliferation. On average, the best treatment would allow for an estimated production of 161,080 plantlets by the micropropagation of the axillary buds of one plant with eight slips and ten buds per slip, within a period of eight months.
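As a back-of-the-envelope check of that estimate, the sketch below derives the per-bud yield implied by the abstract's figures; the subculture cycle counts are hypothetical assumptions for illustration, not values from the study:

```python
# Only the totals come from the abstract; the cycle counts are assumptions.
slips_per_plant = 8
buds_per_slip = 10
target_plantlets = 161_080

initial_buds = slips_per_plant * buds_per_slip      # 80 explants
yield_per_bud = target_plantlets / initial_buds     # ~2013.5 plantlets per bud

# If multiplication proceeds in n roughly equal subculture cycles, the implied
# per-cycle multiplication rate is the n-th root of the per-bud yield.
for n_cycles in (3, 4, 5):                          # hypothetical cycle counts
    rate = yield_per_bud ** (1 / n_cycles)
    print(f"{n_cycles} cycles -> ~{rate:.1f}x shoots per cycle")
```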
Abstract:
The study focuses on international diversification from the perspective of a Finnish investor. A second objective is to examine whether new covariance matrix estimators improve the optimization of the minimum variance portfolio. In addition to the ordinary sample covariance matrix, two shrinkage estimators and a flexible multivariate GARCH(1,1) model are used in the optimization. The data consist of Dow Jones industry indices and the OMX-H portfolio index. The international diversification strategy is implemented using an industry approach, and the portfolio is optimized using twelve components. The data cover the years 1996-2005, i.e., 120 monthly observations. The performance of the constructed portfolios is measured with the Sharpe ratio. The results show no statistically significant difference between the risk-adjusted returns of the internationally diversified investments and the domestic portfolio. Nor does the use of the new covariance matrix estimators add statistically significant value compared with portfolio optimization based on the sample covariance matrix.
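A minimal sketch of the comparison described above, assuming random placeholder returns and a fixed shrinkage intensity (the study's shrinkage estimators and GARCH model are more elaborate):

```python
# Minimum variance weights from the sample covariance vs. a shrunk covariance.
import numpy as np

rng = np.random.default_rng(0)
returns = rng.normal(0.005, 0.05, size=(120, 12))   # 120 months, 12 components

def min_variance_weights(cov):
    """w = S^-1 1 / (1' S^-1 1): the fully invested minimum variance portfolio."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

sample_cov = np.cov(returns, rowvar=False)

# Shrink toward a scaled identity target (constant variance, zero correlation);
# delta is fixed here, whereas Ledoit-Wolf style estimators infer it from data.
delta = 0.3
target = np.eye(12) * np.trace(sample_cov) / 12
shrunk_cov = delta * target + (1 - delta) * sample_cov

for name, cov in [("sample", sample_cov), ("shrunk", shrunk_cov)]:
    w = min_variance_weights(cov)
    vol = np.sqrt(w @ cov @ w)
    print(f"{name}: in-sample monthly volatility {vol:.4f}")
```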
Abstract:
We propose a new approach and related indicators for globally distributed software support and development, based on a 3-year process improvement project in a globally distributed engineering company. The company develops, delivers and supports a complex software system with tailored hardware components and unique end-customer installations. By applying domain knowledge from operations management on lead time reduction and its multiple benefits to process performance, the workflows of globally distributed software development and multitier support processes were measured and monitored throughout the company. The results show that global end-to-end process visibility and centrally managed reporting at all levels of the organization catalyzed a change process toward significantly better performance. With the new performance indicators, based on lead times and their variation under fixed control procedures, the case company was able to report faster bug-fixing cycle times, improved response times and generally better customer satisfaction in its global operations. In all, lead times to implement new features and to respond to customer issues and requests were reduced by 50%.
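As an illustration of such lead-time indicators (not the case company's actual reporting system), a sketch that computes end-to-end cycle time and its variation from hypothetical issue timestamps:

```python
# Hypothetical (opened, closed) dates stand in for real ticket data.
from datetime import datetime
from statistics import median, pstdev

issues = [
    ("2023-01-02", "2023-01-09"),
    ("2023-01-03", "2023-01-20"),
    ("2023-01-10", "2023-01-12"),
]

lead_times = [
    (datetime.fromisoformat(done) - datetime.fromisoformat(start)).days
    for start, done in issues
]

print(f"median lead time: {median(lead_times)} days")
print(f"variation (std dev): {pstdev(lead_times):.1f} days")
```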
Abstract:
An alternative to the Pareto-dominance relation is proposed. The new relation is based on ranking a set of solutions according to each separate objective and an aggregation function that calculates a scalar fitness value for each solution. The relation is called ranking-dominance, and it tries to tackle the curse of dimensionality commonly observed in evolutionary multi-objective optimization. Ranking-dominance can be used to sort a set of solutions even for a large number of objectives, when the Pareto-dominance relation can no longer distinguish solutions from one another. This permits search to advance even with a large number of objectives. It is also shown that ranking-dominance does not violate Pareto-dominance. Results indicate that selection based on ranking-dominance is able to advance search towards the Pareto front in some cases where selection based on Pareto-dominance stagnates. However, in some cases it is also possible that search does not proceed in the direction of the Pareto front, because the ranking-dominance relation permits deterioration of individual objectives. Results also show that when the number of objectives increases, selection based on just Pareto-dominance without diversity maintenance is able to advance search better than with diversity maintenance. Diversity maintenance thus contributes to the curse of dimensionality.
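A sketch of a ranking-dominance style comparison, reconstructed from the abstract's description (per-objective ranking plus an aggregation function; the thesis's exact definition may differ):

```python
# Minimization assumed; the aggregation function here is the sum of ranks.
import numpy as np

def rank_sums(objectives):
    """objectives: (n_solutions, n_objectives) array. Each column is ranked
    independently; the aggregation sums the per-objective ranks."""
    ranks = objectives.argsort(axis=0).argsort(axis=0)   # rank within column
    return ranks.sum(axis=1)

def ranking_dominates(i, j, agg):
    """Solution i ranking-dominates j if its aggregated rank is strictly lower."""
    return agg[i] < agg[j]

pop = np.array([[1.0, 2.0, 3.0],
                [2.0, 1.0, 1.0],
                [3.0, 3.0, 2.0]])
agg = rank_sums(pop)                        # -> [3, 1, 5]
print(agg, ranking_dominates(1, 0, agg))    # True
```

Note that solutions 0 and 1 are incomparable under Pareto-dominance (each is better on some objective), yet ranking-dominance still orders them, which is exactly how the relation keeps selection pressure alive in high dimensions.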
Abstract:
The solvability of the problem of fair exchange in a synchronous system subject to Byzantine failures is investigated in this work. The fair exchange problem arises when a group of processes are required to exchange digital items in a fair manner, which means that either each process obtains the item it was expecting or no process obtains any information on the inputs of others. After introducing a novel specification of fair exchange that clearly separates safety and liveness, we give an overview of the difficulty of solving such a problem in the context of a fully-connected topology. On the one hand, we show that no solution to fair exchange exists in the absence of an identified process that every process can trust a priori; on the other, a well-known solution to fair exchange relying on a trusted third party is recalled. These two results lead us to complete our system model with a flexible representation of the notion of trust. We then show that fair exchange is solvable if and only if a connectivity condition, named the reachable majority condition, is satisfied. The necessity of the condition is proven by an impossibility result and its sufficiency by presenting a general solution to fair exchange relying on a set of trusted processes. The focus is then turned towards a specific network topology in order to provide a fully decentralized, yet realistic, solution to fair exchange. The general solution mentioned above is optimized by reducing the computational load assumed by trusted processes as far as possible. Accordingly, our fair exchange protocol relies on trusted tamperproof modules that have limited communication abilities and are only required in key steps of the algorithm. This modular solution is then implemented in the context of a pedagogical application developed for illustrating and apprehending the complexity of fair exchange. This application, which also includes the implementation of a wide range of Byzantine behaviors, allows executions of the algorithm to be set up and monitored through a graphical display. Surprisingly, some of our results on fair exchange seem to contradict those found in the literature on secure multiparty computation, a problem from the field of modern cryptography, although the two problems have much in common. Both problems are closely related to the notion of trusted third party, but their approaches and descriptions differ greatly. By introducing a common specification framework, a comparison is proposed in order to clarify their differences and the possible origins of the confusion between them. This leads us to introduce the problem of generalized fair computation, a generalization of fair exchange. Finally, a solution to this new problem is given by generalizing our modular solution to fair exchange.
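The well-known trusted third party solution recalled above can be caricatured in a few lines. This toy sketch only illustrates the all-or-nothing release idea and is not the thesis's protocol; the class and method names are invented:

```python
# Both parties deposit their items with a process they trust a priori, which
# releases items only when every expected counterpart has also deposited.
class TrustedThirdParty:
    def __init__(self):
        self.deposits = {}

    def deposit(self, party, item, expected_from):
        self.deposits[party] = (item, expected_from)
        return self._try_exchange()

    def _try_exchange(self):
        # Either each party gets what it expected, or nobody learns anything.
        for _, (_, expected_from) in self.deposits.items():
            if expected_from not in self.deposits:
                return None                 # fairness: withhold until complete
        return {party: self.deposits[other][0]
                for party, (_, other) in self.deposits.items()}

ttp = TrustedThirdParty()
ttp.deposit("alice", "item-A", expected_from="bob")     # returns None: waiting
print(ttp.deposit("bob", "item-B", expected_from="alice"))
# -> {'alice': 'item-B', 'bob': 'item-A'}
```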
Abstract:
The past few decades have seen a considerable increase in the number of parallel and distributed systems. With the development of more complex applications, the need for more powerful systems has emerged and various parallel and distributed environments have been designed and implemented. Each of the environments, including hardware and software, has unique strengths and weaknesses. There is no single parallel environment that can be identified as the best environment for all applications with respect to hardware and software properties. The main goal of this thesis is to provide a novel way of performing data-parallel computation in parallel and distributed environments by utilizing the best characteristics of different aspects of parallel computing. For the purpose of this thesis, three aspects of parallel computing were identified and studied. First, three parallel environments (shared memory, distributed memory, and a network of workstations) are evaluated to quantify their suitability for different parallel applications. Due to the parallel and distributed nature of the environments, the networks connecting the processors in these environments were investigated with respect to their performance characteristics. Second, scheduling algorithms are studied in order to make them more efficient and effective. A concept of application-specific information scheduling is introduced. The application-specific information is data about the workload extracted from an application, which is provided to a scheduling algorithm. Three scheduling algorithms are enhanced to utilize the application-specific information to further refine their scheduling properties. A more accurate description of the workload is especially important in cases where the work units are heterogeneous and the parallel environment is heterogeneous and/or non-dedicated. The results obtained show that the additional information regarding the workload has a positive impact on the performance of applications. Third, a programming paradigm for networks of symmetric multiprocessor (SMP) workstations is introduced. The MPIT programming paradigm incorporates the Message Passing Interface (MPI) with threads to provide a methodology for writing parallel applications that efficiently utilize the available resources and minimize the overhead. MPIT allows communication and computation to overlap by deploying a dedicated thread for communication. Furthermore, the programming paradigm implements an application-specific scheduling algorithm. The scheduling algorithm is executed by the communication thread, so scheduling does not affect the execution of the parallel application. Performance results show that MPIT achieves considerable improvements over conventional MPI applications.
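A hedged sketch of the dedicated-communication-thread idea, using mpi4py as a stand-in; the abstract does not show MPIT's actual API, so everything below is an illustrative reconstruction:

```python
# Overlap computation with communication by giving communication its own thread.
import threading
import queue
from mpi4py import MPI      # requires an MPI build supporting THREAD_MULTIPLE

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
outbox = queue.Queue()

def communicator():
    """Dedicated communication thread: drains the outbox so the main
    (compute) thread never blocks on sends."""
    while True:
        msg, dest = outbox.get()
        if dest is None:            # shutdown sentinel
            break
        comm.send(msg, dest=dest, tag=0)

t = threading.Thread(target=communicator, daemon=True)
t.start()

if rank == 0:
    partial = sum(i * i for i in range(10_000))   # stand-in computation
    outbox.put((partial, 1))                      # ship result asynchronously
    outbox.put((None, None))
    t.join()
elif rank == 1:
    print("received:", comm.recv(source=0, tag=0))
```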
Abstract:
Small centrifugal compressors are more and more widely used in many industrial systems because of their higher efficiency and better off-design performance compared to piston and scroll compressors, as well as a higher work coefficient per stage than in axial compressors. Higher efficiency is always the aim of the compressor designer. In the present work, the influence of four parts of a small centrifugal compressor that compresses a heavy molecular weight real gas has been investigated in order to achieve higher efficiency. Two parts concern the impeller: the tip clearance and the circumferential position of the splitter blade. The other two concern the diffuser: the pinch shape and the vane shape. Computational fluid dynamics is applied in this study. The Reynolds-averaged Navier-Stokes flow solver Finflo is used, with a quasi-steady approach, and Chien's k-ε turbulence model is used to model the turbulence. A new practical real gas model is presented in this study; it is easily generated, its accuracy is controllable, and it is fairly fast. The numerical results and measurements show good agreement. The influence of tip clearance on the performance of a small compressor is obvious. The pressure ratio and efficiency decrease as the size of the tip clearance is increased, while the total enthalpy rise stays almost constant. The decrement of the pressure ratio and efficiency is larger at higher mass flow rates and smaller at lower mass flow rates. The flow angles at the inlet and outlet of the impeller increase as the size of the tip clearance is increased. The results of the detailed flow field show that leakage flow is the main reason for the performance drop. The secondary flow region becomes larger as the size of the tip clearance is increased, and the area of the main flow is compressed; the flow uniformity is thereby decreased. A detailed study shows that the leakage flow rate is higher near the exit of the impeller than near the inlet. Based on this phenomenon, a new partially shrouded impeller is used, shrouded near its exit. The results show that the flow field near the exit of the impeller is greatly changed by the partially shrouded impeller, and better performance is achieved than with the unshrouded impeller. The loading distribution on the impeller blade and the flow fields in the impeller are changed by moving the splitter of the impeller in the circumferential direction. Moving the splitter slightly towards the suction side of the long blade can improve the performance of the compressor. The total enthalpy rise is reduced if only the leading edge of the splitter is moved towards the suction side of the long blade. The performance of the compressor is decreased if the blade is bent from the radial direction at the leading edge of the splitter. The total pressure rise and the enthalpy rise of the compressor are increased if a pinch is used at the diffuser inlet. Among the five different pinch shape configurations, at the design and lower mass flow rates the efficiency of a straight-line pinch is the highest, while at higher mass flow rates the efficiency of a concave pinch is the highest. The sharp corner of the pinch is the main reason for the decrease of efficiency and should be avoided. The spanwise variation of the flow angles entering the diffuser is decreased if a pinch is applied. A three-dimensional low solidity twisted vaned diffuser is designed to match the flow angles entering the diffuser. The numerical results show that the pressure recovery in the twisted diffuser is higher than in a conventional low solidity vaned diffuser, which also leads to higher efficiency of the twisted diffuser. Investigation of the detailed flow fields shows that separation at low mass flow rates occurs later in the twisted diffuser than in the conventional low solidity vaned diffuser, which suggests a wider flow range for the twisted diffuser.
Abstract:
Technological development brings more and more complex systems to the consumer markets. The time required to bring a new product to market is crucial for the competitive edge of a company. Simulation is used as a tool to model these products and their operation before actual live systems are built. The complexity of these systems can easily require large amounts of memory and computing power, and distributed simulation can be used to meet these demands. Distributed simulation has its own problems, however. Diworse, a distributed simulation environment, was used in this study to analyze the different factors that affect the time required for the simulation of a system. Examples of these factors are the simulation algorithm, the communication protocols, the partitioning of the problem, the distribution of the problem, the capabilities of the computing and communications equipment, and the external load. Offices offer vast amounts of unused capacity in the form of idle workstations. The use of this computing power for distributed simulation requires the simulation to adapt to a changing load situation: all or part of the simulation work must be removed from a workstation when the owner wishes to use the workstation again. If load balancing is not performed, the simulation suffers from the workstation's reduced performance, which also hampers the owner's work. The operation of load balancing in Diworse is studied and shown to perform better than no load balancing, and different approaches to load balancing are discussed.
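A toy sketch of the adaptive behavior described above, in which simulation partitions migrate off a workstation once its owner returns. This is an illustration of the idea, not Diworse's algorithm; all names are invented:

```python
# Move work off nodes reclaimed by their owners onto the least-loaded idle node.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    owner_active: bool = False
    partitions: list = field(default_factory=list)

def rebalance(nodes):
    """Vacate every owner-reclaimed node, spreading its partitions evenly."""
    idle = [n for n in nodes if not n.owner_active]
    if not idle:
        raise RuntimeError("no idle workstations left")
    for node in nodes:
        while node.owner_active and node.partitions:
            target = min(idle, key=lambda n: len(n.partitions))
            target.partitions.append(node.partitions.pop())

a = Node("ws-a", partitions=["p1", "p2"])
b = Node("ws-b", partitions=["p3"])
a.owner_active = True            # owner came back: ws-a must be vacated
rebalance([a, b])
print([(n.name, n.partitions) for n in (a, b)])
# -> [('ws-a', []), ('ws-b', ['p3', 'p2', 'p1'])]
```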
Abstract:
Background: Design of newly engineered microbial strains for biotechnological purposes would greatly benefit from the development of realistic mathematical models for the processes to be optimized. Such models can then be analyzed and, with the development and application of appropriate optimization techniques, one could identify the modifications that need to be made to the organism in order to achieve the desired biotechnological goal. As appropriate models to perform such an analysis are necessarily non-linear and typically non-convex, finding their global optimum is a challenging task. Canonical modeling techniques, such as Generalized Mass Action (GMA) models based on the power-law formalism, offer a possible solution to this problem because they have a mathematical structure that enables the development of specific algorithms for global optimization. Results: Based on the GMA canonical representation, we have developed in previous work a highly efficient optimization algorithm and a set of related strategies for understanding the evolution of adaptive responses in cellular metabolism. Here, we explore the possibility of recasting kinetic non-linear models into an equivalent GMA model, so that global optimization on the recast GMA model can be performed. With this technique, optimization is greatly facilitated and the results are transposable to the original non-linear problem. This procedure is straightforward for a particular class of non-linear models known as Saturable and Cooperative (SC) models that extend the power-law formalism to deal with saturation and cooperativity. Conclusions: Our results show that recasting non-linear kinetic models into GMA models is indeed an appropriate strategy that helps overcome some of the numerical difficulties that arise during the global optimization task.
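The recasting idea can be shown on the simplest saturable rate law. The Michaelis-Menten example and its parameter values below are illustrative, not from the paper:

```python
# A saturable rate is rewritten in GMA power-law form by introducing an
# auxiliary variable W = Km + S, giving v = Vmax * S^1 * W^-1 (a product of
# power laws). In a full dynamic model W carries the constraint dW/dt = dS/dt.
Vmax, Km = 2.0, 0.5        # illustrative parameters

def v_kinetic(S):
    return Vmax * S / (Km + S)        # original saturable rate law

def v_gma(S):
    W = Km + S                        # auxiliary variable
    return Vmax * S**1 * W**-1        # GMA form: product of power laws

for S in (0.1, 1.0, 10.0):
    assert abs(v_kinetic(S) - v_gma(S)) < 1e-12
print("recast GMA rate matches the kinetic rate on all test points")
```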
Abstract:
Optimization models in metabolic engineering and systems biology typically focus on optimizing a single criterion, usually the synthesis rate of a metabolite of interest or the rate of growth. Connectivity and non-linear regulatory effects, however, make it necessary to consider multiple objectives in order to identify useful strategies that balance out different metabolic issues. This is a fundamental aspect, as optimization of maximum yield in a given condition may involve unrealistic values in other key processes. Due to the difficulties associated with detailed non-linear models, analyses using stoichiometric descriptions and linear optimization methods have become rather popular in systems biology. However, despite being useful, these approaches fail to capture the intrinsic non-linear nature of the underlying metabolic systems and the regulatory signals involved. Targeting more complex biological systems requires the application of global optimization methods to non-linear representations. In this work we address the multi-objective global optimization of metabolic networks that are described by a special class of models based on the power-law formalism: the generalized mass action (GMA) representation. Our goal is to develop global optimization methods capable of efficiently dealing with several biological criteria simultaneously. In order to overcome the numerical difficulties of dealing with multiple criteria in the optimization, we propose a heuristic approach based on the epsilon constraint method that reduces the computational burden of generating a set of Pareto optimal alternatives, each achieving a unique combination of objective values. To facilitate the post-optimal analysis of these solutions and narrow down their number prior to being tested in the laboratory, we explore the use of Pareto filters that identify the preferred subset of enzymatic profiles. We demonstrate the usefulness of our approach by means of a case study that optimizes ethanol production in the fermentation of Saccharomyces cerevisiae.
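A minimal sketch of the epsilon constraint idea, with toy objectives standing in for the GMA model (the objectives, bounds and epsilon range are invented for illustration):

```python
# Maximize one objective while bounding the other, sweeping the bound to
# trace a set of Pareto optimal alternatives.
import numpy as np
from scipy.optimize import minimize

def f1(x):          # e.g. product synthesis rate (to maximize)
    return x[0] * x[1]

def f2(x):          # e.g. enzyme cost (to keep below epsilon)
    return x[0] + 2 * x[1]

pareto = []
for eps in np.linspace(1.0, 3.0, 5):        # sweep the bound on f2
    res = minimize(lambda x: -f1(x), x0=[0.5, 0.5],
                   bounds=[(0, 2), (0, 2)],
                   constraints=[{"type": "ineq",
                                 "fun": lambda x, e=eps: e - f2(x)}])
    pareto.append((f1(res.x), f2(res.x)))

for p1, p2 in pareto:
    print(f"f1 = {p1:.3f}, f2 = {p2:.3f}")   # each line: one Pareto alternative
```

Each epsilon value yields one point; a Pareto filter of the kind mentioned above would then prune this list to the preferred subset.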
Abstract:
The control of the right application of medical protocols is a key issue in hospital environments. For the automated monitoring of medical protocols, we need a domain-independent language for their representation and a fully or semi-autonomous system that understands the protocols and supervises their application. In this paper we describe a specification language and a multi-agent system architecture for monitoring medical protocols. We model medical services in hospital environments as specialized domain agents and interpret a medical protocol as a negotiation process between agents. A medical service can be involved in multiple medical protocols, so specialized domain agents are independent of negotiation processes and autonomous system agents perform the monitoring tasks. We present the detailed architecture of the system agents and of an important domain agent, the database broker agent, which is responsible for obtaining relevant information about the clinical history of patients. We also describe how we tackle the problems of privacy, integrity and authentication during the process of exchanging information between agents.
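As a small illustration of the integrity and authentication concern (the message format, field names and key handling below are assumptions, not the paper's design), agents could sign exchanged records with an HMAC:

```python
# A domain agent signs its request; the database broker agent verifies the
# tag before answering, so tampered or unauthenticated messages are rejected.
import hmac, hashlib, json

SHARED_KEY = b"demo-key-per-agent-pair"   # placeholder: per-pair secret

def sign(payload: dict) -> dict:
    body = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return {"body": payload, "mac": tag}

def verify(message: dict) -> bool:
    body = json.dumps(message["body"], sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["mac"])

request = sign({"agent": "cardiology", "query": "history", "patient": "p-042"})
assert verify(request)        # broker checks authenticity before replying
print("authenticated request:", request["body"])
```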