990 results for Optimal values


Relevance: 70.00%

Abstract:

BACKGROUND AND OBJECTIVES: The SBP values to be achieved by antihypertensive therapy in order to maximize reduction of cardiovascular outcomes are unknown; neither is it clear whether, in patients with a previous cardiovascular event, the optimal values are lower than in low-to-moderate-risk hypertensive patients, or whether a more cautious blood pressure (BP) reduction should be pursued. Because of the uncertainty whether 'the lower the better' or the 'J-curve' hypothesis is correct, the European Society of Hypertension and the Chinese Hypertension League have promoted a randomized trial comparing antihypertensive treatment strategies aiming at three different SBP targets in hypertensive patients with a recent stroke or transient ischaemic attack. As the optimal low-density lipoprotein cholesterol (LDL-C) level is also unknown in these patients, LDL-C lowering has been included in the design. PROTOCOL DESIGN: The European Society of Hypertension-Chinese Hypertension League Stroke in Hypertension Optimal Treatment trial is a prospective multinational, randomized trial with a 3 × 2 factorial design comparing three different SBP targets (1, <145-135; 2, <135-125; 3, <125 mmHg) and two different LDL-C targets (target A, 2.8-1.8; target B, <1.8 mmol/l). The trial is to be conducted on 7500 patients aged at least 65 years (2500 in Europe, 5000 in China) with hypertension and a stroke or transient ischaemic attack 1-6 months before randomization. Antihypertensive and statin treatments will be initiated or modified using suitable registered agents chosen by the investigators, in order to maintain patients within the randomized SBP and LDL-C windows. All patients will be followed up every 3 months for BP and every 6 months for LDL-C. Ambulatory BP will be measured yearly. OUTCOMES: The primary outcome is time to stroke (fatal and non-fatal).
Important secondary outcomes are time to first major cardiovascular event, cognitive decline (Montreal Cognitive Assessment) and dementia. All major outcomes will be adjudicated by committees blind to randomized allocation. A Data and Safety Monitoring Board has open access to the data and can recommend trial interruption for safety. SAMPLE SIZE CALCULATION: It has been calculated that 925 patients will reach the primary outcome after a mean 4-year follow-up, which should provide at least 80% power to detect a 25% difference in stroke incidence between SBP targets and a 20% difference between LDL-C targets.
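The event-count logic above can be sketched with Schoenfeld's approximation for event-driven survival comparisons. The sketch below is illustrative only: it assumes a two-sided α of 0.05, 80% power, and 1:1 allocation between two target groups, and it computes the events required for a single pairwise comparison, not the trial's actual multi-arm design calculation.

```python
import math
from statistics import NormalDist

def schoenfeld_events(hazard_ratio, alpha=0.05, power=0.80, alloc=0.5):
    """Events needed to detect `hazard_ratio` with a two-sided log-rank
    test (Schoenfeld's approximation); `alloc` is the allocation fraction."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    return math.ceil((z_alpha + z_beta) ** 2
                     / (alloc * (1 - alloc) * math.log(hazard_ratio) ** 2))

# A 25% stroke reduction corresponds to a hazard ratio of 0.75.
print(schoenfeld_events(0.75))  # 380 events for this single comparison
```

The three-arm factorial design and the smaller LDL-C contrast would change these numbers, which is why the protocol's own figure (925 events) differs from this two-arm sketch.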

Relevance: 60.00%

Abstract:

The effects of agar concentration on explant growth and callus formation were evaluated in axenic cultures of female gametophytes of the green and red color morphs of Gracilaria domingensis (Kützing) Sonder ex Dickie. Unialgal cultures were maintained in sterilized seawater (30-32 psu) enriched with 25% von Stosch solution (VSES 25%), at 22 ± 2 °C, a 14-h photoperiod, and an irradiance of 50-80 µmol photons m⁻² s⁻¹. To obtain axenic explants, apical and intercalary segments of both morphs were cultivated for 48 h in VSES 25% medium supplemented with an antibiotic and antifungal solution, and then washed for 20 seconds in sterilized seawater containing 0.5% sodium hypochlorite and 200 µL L⁻¹ of detergent. To evaluate the effects of agar concentration, the axenic segments were inoculated into ASP 12-NTA medium with agar concentrations ranging from zero to 1%. The addition of agar to the medium inhibited the growth of apical segments of both morphs, as well as the growth of intercalary segments of the green morph. A general trend was observed in which the explant growth rate was inversely proportional to agar concentration. The addition of agar to the medium induced the formation of three types of callus, named according to the region of the explant where they originated: apical, basal, and intermediate callus. Agar concentrations of 0.5% and 0.7% were optimal for the induction of basal and intermediate calluses in the green morph, respectively. The presence of agar was essential for the formation of intermediate and apical calluses. The results indicate that agar plays a role in regulating morphogenetic processes in the pigmentation morphs of G. domingensis.

Relevance: 60.00%

Abstract:

Master's dissertation, Animal Science Engineering (Zootechnics), 27 April 2015, Universidade dos Açores.

Relevance: 60.00%

Abstract:

This dissertation studied an adiabatic air-conditioning plant intended to control the temperature and humidity of a hall housing twisting and winding equipment at Continental - ITA. Indoor and outdoor temperature and humidity data for the hall were collected, and these parameters were found to fall outside the desired optimal values of 26 ± 1 °C and 50 ± 5%, so the nominal cooling needs had to be estimated. This value was determined according to the Portuguese Regulation on the Thermal Behaviour Characteristics of Buildings (RCCTE), yielding 79 kWh/m².°C. To assess whether the air-conditioning units installed in the hall met these needs, their cooling capacities were calculated, giving a maximum of 64 kWh/m².°C. In parallel, the humidification efficiency of each unit was calculated for the months of March and September. The values obtained fluctuated, with a maximum of 100% in September. This is because the outdoor temperature in that month is higher and, consequently, the humidification efficiency of the unit is greater, since the amount of water the air can hold is also higher. To close the gap between the nominal cooling needs and the cooling capacity of the units, several solutions were analyzed which, if implemented, could contribute to energy savings. One of these was replacing the current humidification system with a more efficient high-pressure system; the economic study of this investment yielded a payback period of two years. Two further investments were also presented, modifying the existing automatic control system, with payback periods of two years for one and three and a half years for the other.

Relevance: 60.00%

Abstract:

This paper analyzes the performance of two cooperative robot manipulators. In order to capture the working performance, we formulated several performance indices that measure the manipulability, the effort reduction, and the equilibrium between the two robots. Based on the proposed indices, we determined the optimal values for the system parameters. Furthermore, the implementation of fractional-order algorithms in the position/force control of two cooperative robotic manipulators holding an object is studied.
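Fractional-order control replaces the integer-order derivative in a PD/PID law with a derivative of non-integer order α. One common discrete realization, shown here as an illustrative sketch (not code from the paper), is the truncated Grünwald-Letnikov series:

```python
def gl_coefficients(alpha, n):
    """Recursive Grünwald-Letnikov binomial coefficients (-1)^k C(alpha, k)."""
    c = [1.0]
    for k in range(1, n + 1):
        c.append(c[-1] * (1 - (alpha + 1) / k))
    return c

def gl_derivative(samples, alpha, h):
    """Approximate D^alpha of a signal at its newest sample.

    samples: most recent value first; h: sampling period.
    """
    c = gl_coefficients(alpha, len(samples) - 1)
    return sum(ck * x for ck, x in zip(c, samples)) / h ** alpha

# Sanity check: for alpha = 1 the series collapses to a backward difference,
# so the derivative of f(t) = t is 1.
h = 0.01
history = [1.0, 1.0 - h, 1.0 - 2 * h]   # f(t) = t, sampled backwards from t = 1
print(gl_derivative(history, 1.0, h))   # 1.0 (up to floating-point rounding)
```

With 0 < α < 2 the same routine provides the fractional derivative term of a discrete fractional-order PD controller.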

Relevance: 60.00%

Abstract:

In this paper, we formulate the electricity retailers’ short-term decision-making problem in a liberalized retail market as a multi-objective optimization model. Retailers with light physical assets, such as generation and storage units in the distribution network, are considered. Following advances in smart grid technologies, electricity retailers are becoming able to employ incentive-based demand response (DR) programs in addition to their physical assets to effectively manage the risks of market price and load variations. In this model, the DR scheduling is performed simultaneously with the dispatch of generation and storage units. The ultimate goal is to find the optimal values of the hourly financial incentives offered to the end-users. The proposed model considers the capacity obligations imposed on retailers by the grid operator. The profit-seeking retailer also aims to minimize peak demand in order to avoid high capacity charges in the form of grid tariffs or penalties. The non-dominated sorting genetic algorithm II (NSGA-II), a fast and elitist multi-objective evolutionary algorithm, is used to solve the multi-objective problem. A case study is solved to illustrate the efficient performance of the proposed methodology. Simulation results show the effectiveness of the model for designing the incentive-based DR programs and indicate the efficiency of NSGA-II in solving the retailers’ multi-objective problem.
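The core of NSGA-II is fast non-dominated sorting, which ranks the population into successive Pareto fronts before crowding-distance selection. A minimal sketch for minimization objectives (generic, not the retailer model itself):

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(points):
    """Partition objective vectors (minimization) into Pareto fronts."""
    n = len(points)
    dominated_by = [[] for _ in range(n)]   # solutions that i dominates
    dom_count = [0] * n                     # how many solutions dominate i
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if dominates(points[i], points[j]):
                dominated_by[i].append(j)
            elif dominates(points[j], points[i]):
                dom_count[i] += 1
    fronts = [[i for i in range(n) if dom_count[i] == 0]]
    while fronts[-1]:
        nxt = []
        for i in fronts[-1]:
            for j in dominated_by[i]:
                dom_count[j] -= 1
                if dom_count[j] == 0:       # all its dominators are ranked
                    nxt.append(j)
        fronts.append(nxt)
    return fronts[:-1]

# Two objectives to minimize; (3, 3) is dominated by every other point.
print(non_dominated_sort([(1, 2), (2, 1), (3, 3), (2, 2)]))
```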

Relevance: 60.00%

Abstract:

A Work Project, presented as part of the requirements for the Award of a Master's Degree in Economics from the NOVA – School of Business and Economics

Relevance: 60.00%

Abstract:

Master's dissertation for the International Master in Sustainability of the Built Environment

Relevance: 60.00%

Abstract:

We show that the dispersal routes reconstruction problem can be stated as an instance of a graph theoretical problem known as the minimum cost arborescence problem, for which there exist efficient algorithms. Furthermore, we derive some theoretical results, in a simplified setting, on the possible optimal values that can be obtained for this problem. With this, we place the dispersal routes reconstruction problem on solid theoretical grounds, establishing it as a tractable problem that also lends itself to formal mathematical and computational analysis. Finally, we present an insightful example of how this framework can be applied to real data. We propose that our computational method can be used to define the most parsimonious dispersal (or invasion) scenarios, which can then be tested using complementary methods such as genetic analysis.
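A minimum cost arborescence (a minimum-weight directed spanning tree oriented away from a root) can be computed with the Chu-Liu/Edmonds algorithm. The cost-only O(VE) sketch below is a generic illustration, not the authors' implementation:

```python
def min_arborescence_cost(n, root, edges):
    """Cost of a minimum arborescence rooted at `root`, or None if some
    node is unreachable. `edges` is a list of directed (u, v, w) triples."""
    INF = float("inf")
    total = 0
    while True:
        # 1. cheapest incoming edge for every node except the root
        in_w, pre = [INF] * n, [-1] * n
        for u, v, w in edges:
            if u != v and v != root and w < in_w[v]:
                in_w[v], pre[v] = w, u
        if any(in_w[v] == INF for v in range(n) if v != root):
            return None
        for v in range(n):
            if v != root:
                total += in_w[v]
        # 2. look for cycles among the chosen edges
        comp, vis, cnt = [-1] * n, [-1] * n, 0
        for s in range(n):
            v = s
            while v != root and vis[v] == -1 and comp[v] == -1:
                vis[v] = s
                v = pre[v]
            if v != root and comp[v] == -1 and vis[v] == s:
                u = pre[v]                  # v closes a cycle: label it
                while u != v:
                    comp[u] = cnt
                    u = pre[u]
                comp[v] = cnt
                cnt += 1
        if cnt == 0:
            return total                    # no cycle: chosen edges form a tree
        # 3. contract the cycles and reduce weights of entering edges
        for v in range(n):
            if comp[v] == -1:
                comp[v] = cnt
                cnt += 1
        edges = [(comp[u], comp[v], w - in_w[v])
                 for u, v, w in edges if comp[u] != comp[v]]
        n, root = cnt, comp[root]

# Root 0 must reach nodes 1 and 2, which form a cheap 2-cycle.
print(min_arborescence_cost(3, 0, [(0, 1, 10), (0, 2, 10), (1, 2, 1), (2, 1, 1)]))  # 11
```

In the dispersal setting, nodes would be sampled locations, edge weights a dissimilarity or distance, and the root the hypothesized origin.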

Relevance: 60.00%

Abstract:

In distributed energy production, permanent magnet synchronous generators (PMSG) are often connected to the grid via frequency converters, such as voltage source line converters. The price of the converter may constitute a large part of the costs of a generating set. Some of the permanent magnet synchronous generators with converters and traditional separately excited synchronous generators could be replaced by direct-on-line (DOL) non-controlled PMSGs. Small directly network-connected generators are likely to have large markets in the area of distributed electric energy generation. Typical prime movers could be windmills, watermills and internal combustion engines. DOL PMSGs could also be applied in island networks, such as ships and oil platforms. Also various back-up power generating systems could be carried out with DOL PMSGs. The benefits would be a lower price of the generating set and the robustness and easy use of the system. The performance of DOL PMSGs is analyzed. The electricity distribution companies have regulations that constrain the design of the generators being connected to the grid. The general guidelines and recommendations are applied in the analysis. By analyzing the results produced by the simulation model for the permanent magnet machine, the guidelines for efficient damper winding parameters for DOL PMSGs are presented. The simulation model is used to simulate grid connections and load transients. The damper winding parameters are calculated by the finite element method (FEM) and determined from experimental measurements. Three-dimensional finite element analysis (3D FEA) is carried out. The results from the simulation model and 3D FEA are compared with practical measurements from two prototype axial flux permanent magnet generators provided with damper windings. The dimensioning of the damper winding parameters is case specific. The damper winding should be dimensioned based on the moment of inertia of the generating set. It is shown that the damper winding has optimal values to reach synchronous operation in the shortest period of time after transient operation. With optimal dimensioning, interference on the grid is minimized.

Relevance: 60.00%

Abstract:

The aim of this work was to investigate options for reducing nitrogen oxide emissions from waste incineration. The thesis reviews the formation of nitrogen oxides during combustion and the methods for their removal. In the treatment of removal methods, the emphasis is on grate firing and, in particular, on selective non-catalytic reduction (SNCR). The experimental part of the work was carried out at the Ekokem Oy Ab waste-to-energy plant in Riihimäki. It first examined the effects of the ammonia-water mass flow, the water mass flow of the SNCR equipment, and explosion-based soot blowing on the nitrogen oxide concentration. At the same time, other factors affecting the nitrogen oxide concentration were identified and the removal efficiency of the SNCR system was determined. The best operating values were then sought for the oily-water mass flow, the SNCR mass flow, and the ratio of primary to secondary air, with respect to nitrogen oxide concentration, ammonia slip, nitrous oxide concentration, ammonia-water consumption, and steam production. The results showed that increasing the ammonia-water mass flow reduces the nitrogen oxide concentration but can cause ammonia emissions. The best SNCR water mass flow is the largest one studied, 800 kg/h, at which the nitrogen oxide concentration and its momentary variation, the ammonia-water consumption, and the ammonia emissions are at their lowest, although the steam flow decreases at the same time. The removal efficiency of the SNCR system was found to be 60%. Explosion soot blowing has no observable effect on the nitrogen oxide concentration, and the oily-water mass flow has no significant effect. In terms of ammonia-water consumption, the best oily-water flow is 600 kg/h, whereas in terms of ammonia slip the best oily-water flow is 950 kg/h. Reducing the share of primary air reduces both the ammonia slip and the ammonia-water consumption.

Relevance: 60.00%

Abstract:

The objective of this thesis is to develop and further generalize the differential evolution based data classification method. For many years, evolutionary algorithms have been successfully applied to many classification tasks. Evolutionary algorithms are population-based, stochastic search algorithms that mimic natural selection and genetics. Differential evolution is an evolutionary algorithm that has gained popularity because of its simplicity and good observed performance. In this thesis a differential evolution classifier with a pool of distances is proposed, demonstrated and initially evaluated. The differential evolution classifier is a nearest-prototype-vector based classifier that applies a global optimization algorithm, differential evolution, to determine the optimal values for all free parameters of the classifier model during the training phase. The differential evolution classifier, which applies an individually optimized distance measure to each new data set to be classified, is generalized to cover a pool of distances. Instead of optimizing a single distance measure for the given data set, the selection of the optimal distance measure from a predefined pool of alternative measures is attempted systematically and automatically. Furthermore, instead of only selecting the optimal distance measure from a set of alternatives, an attempt is made to optimize the values of the possible control parameters related to the selected distance measure. Specifically, a pool of alternative distance measures is first created and then the differential evolution algorithm is applied to select the optimal distance measure that yields the highest classification accuracy with the current data. After determining the optimal distance measures for the given data set together with their optimal parameters, all determined distance measures are aggregated to form a single total distance measure.
The total distance measure is applied to the final classification decisions. The actual classification process is still based on the nearest-prototype-vector principle; a sample belongs to the class represented by the nearest prototype vector when measured with the optimized total distance measure. During the training process the differential evolution algorithm determines the optimal class vectors, selects the optimal distance metrics, and determines the optimal values for the free parameters of each selected distance measure. The results obtained with the above method confirm that the choice of distance measure is one of the most crucial factors for obtaining higher classification accuracy. The results also demonstrate that it is possible to build a classifier that is able to select the optimal distance measure for the given data set automatically and systematically. After the optimal distance measures and their optimal parameters have been found, the results are aggregated to form a total distance, which is used to measure the deviation between the class vectors and the samples and thus to classify the samples. This thesis also discusses two types of aggregation operators, namely ordered weighted averaging (OWA) based multi-distances and generalized ordered weighted averaging (GOWA). These aggregation operators were applied in this work to the aggregation of the normalized distance values. The results demonstrate that a proper combination of aggregation operator and weight-generation scheme plays an important role in obtaining good classification accuracy. The main outcomes of the work are six new generalized versions of the previous method, the differential evolution classifier. All of these DE classifiers demonstrated good results in the classification tasks.
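The classic DE/rand/1/bin scheme that underlies such a classifier can be sketched as follows; here it minimizes a simple sphere function rather than classification error, and all control-parameter values (F, CR, population size) are illustrative:

```python
import random

def differential_evolution(f, bounds, pop_size=30, F=0.8, CR=0.9,
                           generations=200, seed=1):
    """DE/rand/1/bin: mutate with a scaled difference vector, binomially
    cross over with the target vector, keep the better of target/trial."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)      # ensure at least one mutated gene
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == j_rand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                else:
                    v = pop[i][j]
                lo, hi = bounds[j]
                trial.append(min(max(v, lo), hi))
            trial_fit = f(trial)
            if trial_fit <= fit[i]:          # greedy one-to-one selection
                pop[i], fit[i] = trial, trial_fit
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]

best_x, best_f = differential_evolution(lambda v: sum(t * t for t in v),
                                        [(-5.0, 5.0)] * 2)
print(best_f)  # very close to 0 for the sphere function
```

In the classifier described above, the decision vector would hold the class prototype vectors plus the distance-measure choice and its control parameters, and f would be the (negated) classification accuracy on the training data.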

Relevance: 60.00%

Abstract:

In this study, the effects of hot-air drying conditions on the color, water-holding capacity, and total phenolic content of dried apple were investigated using an artificial neural network as an intelligent modeling system. A genetic algorithm was then used to optimize the drying conditions. Apples were dried at different temperatures (40, 60, and 80 °C) and at three air flow rates (0.5, 1, and 1.5 m/s). Applying the leave-one-out cross-validation methodology, simulated and experimental data were in good agreement, with an error below 2.4%. Optimal values of the quality indices were found at 62.9 °C and 1.0 m/s using the genetic algorithm.
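Leave-one-out cross-validation fits the model n times, each time holding out a single sample for testing, which suits small experimental data sets such as drying runs. A generic sketch with a stand-in 1-nearest-neighbour model (the paper's neural network is not reproduced here):

```python
def loo_cv_mae(xs, ys, fit, predict):
    """Mean absolute LOO error: train on all-but-one, test on the held-out point."""
    errors = []
    for i in range(len(xs)):
        train_x = xs[:i] + xs[i + 1:]
        train_y = ys[:i] + ys[i + 1:]
        model = fit(train_x, train_y)
        errors.append(abs(predict(model, xs[i]) - ys[i]))
    return sum(errors) / len(errors)

# Stand-in model: 1-nearest-neighbour on a single input variable.
def fit_1nn(xs, ys):
    return list(zip(xs, ys))

def predict_1nn(model, x):
    return min(model, key=lambda p: abs(p[0] - x))[1]

# y = 2x sampled on a grid; each held-out point is predicted from a neighbour.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]
print(loo_cv_mae(xs, ys, fit_1nn, predict_1nn))  # 2.0: each neighbour is off by 2
```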

Relevance: 60.00%

Abstract:

In this thesis we have developed several inventory models in which items are served to customers after a processing time; this leads to a queue of demand even when items are available. In chapter 2 we discussed a problem involving the search of orbital customers for providing inventory; retrial of orbital customers was also considered in that chapter. In chapter 5 we discussed a retrial inventory model without orbital search of customers. In the remaining chapters (3, 4 and 6) we did not consider retrial of customers; instead we assumed the waiting-room capacity of the system to be arbitrarily large. Although the models in chapters 3 and 4 differ only in that the former assumes a positive lead time for replenishment of inventory while the latter assumes it to be negligible, we arrived at sharper results in chapter 4. In chapter 6 we considered a production inventory model in which the production time of a single item and the service time of a customer follow distinct Erlang distributions. We also introduced protection of production and service stages and investigated the optimal values of the number of stages to be protected.
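An Erlang(k, λ) random variable is the sum of k independent exponential stages with rate λ, which is what makes stage-wise protection meaningful in such models. A small simulation sketch with illustrative parameter values:

```python
import random

def erlang_sample(k, rate, rng):
    """Erlang(k, rate) drawn as the sum of k exponential stages with rate `rate`."""
    return sum(rng.expovariate(rate) for _ in range(k))

rng = random.Random(42)
k, rate = 4, 2.0                       # theoretical mean = k / rate = 2.0
samples = [erlang_sample(k, rate, rng) for _ in range(20000)]
mean = sum(samples) / len(samples)
print(round(mean, 2))                  # close to the theoretical mean of 2.0
```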

Relevance: 60.00%

Abstract:

Distributed systems are one of the most vital components of the economy. The most prominent example is probably the internet, a constituent element of our knowledge society. During the recent years, the number of novel network types has steadily increased. Amongst others, sensor networks, distributed systems composed of tiny computational devices with scarce resources, have emerged. The further development and heterogeneous connection of such systems imposes new requirements on the software development process. Mobile and wireless networks, for instance, have to organize themselves autonomously and must be able to react to changes in the environment and to failing nodes alike. Researching new approaches for the design of distributed algorithms may lead to methods with which these requirements can be met efficiently. In this thesis, one such method is developed, tested, and discussed in respect of its practical utility. Our new design approach for distributed algorithms is based on Genetic Programming, a member of the family of evolutionary algorithms. Evolutionary algorithms are metaheuristic optimization methods which copy principles from natural evolution. They use a population of solution candidates which they try to refine step by step in order to attain optimal values for predefined objective functions. The synthesis of an algorithm with our approach starts with an analysis step in which the wanted global behavior of the distributed system is specified. From this specification, objective functions are derived which steer a Genetic Programming process where the solution candidates are distributed programs. The objective functions rate how close these programs approximate the goal behavior in multiple randomized network simulations. The evolutionary process step by step selects the most promising solution candidates and modifies and combines them with mutation and crossover operators. 
This way, a description of the global behavior of a distributed system is translated automatically to programs which, if executed locally on the nodes of the system, exhibit this behavior. In our work, we test six different ways for representing distributed programs, comprising adaptations and extensions of well-known Genetic Programming methods (SGP, eSGP, and LGP), one bio-inspired approach (Fraglets), and two new program representations called Rule-based Genetic Programming (RBGP, eRBGP) designed by us. We breed programs in these representations for three well-known example problems in distributed systems: election algorithms, the distributed mutual exclusion at a critical section, and the distributed computation of the greatest common divisor of a set of numbers. Synthesizing distributed programs the evolutionary way does not necessarily lead to the envisaged results. In a detailed analysis, we discuss the problematic features which make this form of Genetic Programming particularly hard. The two Rule-based Genetic Programming approaches have been developed especially in order to mitigate these difficulties. In our experiments, at least one of them (eRBGP) turned out to be a very efficient approach and, in most cases, was superior to the other representations.
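The evolutionary loop described above — evaluate candidates against objective functions, select the most promising, and recombine them with mutation and crossover — can be shown in miniature. In this illustrative sketch the "programs" are stand-in bitstrings scored by a OneMax objective instead of randomized network simulations, so every detail is an assumption for demonstration purposes:

```python
import random

def evolve(objective, length=20, pop_size=40, generations=200, seed=7):
    """Tiny generational GA: tournament selection, one-point crossover,
    bit-flip mutation, with the best individual carried over (elitism)."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]

    def tournament():
        a, b = rng.sample(range(pop_size), 2)
        return pop[a] if objective(pop[a]) >= objective(pop[b]) else pop[b]

    for _ in range(generations):
        best = max(pop, key=objective)
        nxt = [best[:]]                      # elitism
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, length)   # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [bit ^ (rng.random() < 1 / length) for bit in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=objective)

onemax = sum                                 # fitness: number of 1-bits
best = evolve(onemax)
print(onemax(best))                          # typically reaches the maximum of 20
```

In the thesis's setting, the objective function would instead score how closely a candidate distributed program approximates the specified global behavior across multiple randomized network simulations.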