31 results for Pareto-Optimal
in Biblioteca Digital da Produção Intelectual da Universidade de São Paulo (BDPI/USP)
Abstract:
In a decentralized setting the game-theoretical prediction is that only strong blockings can rupture the structure of a matching. This paper argues that, under indifferences, weak blockings should also be considered when they come from the grand coalition. This solution concept requires stability plus Pareto optimality. A characterization of the set of Pareto-stable matchings for the roommate and marriage models is provided in terms of individually rational matchings whose blocking pairs, if any, involve unmatched agents. Such matchings always exist and give economic intuition on how blocking can be carried out by non-trading agents, so that transactions need not be undone as agents reach the set of stable matchings. Some properties of the Pareto-stable matchings shared by the marriage and roommate models are obtained.
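As a minimal illustration of the blocking-pair concept used above, here is a toy two-men, two-women marriage market with invented strict preferences (the paper's setting with indifferences is richer): a pair blocks a matching when both agents strictly prefer each other to their current partners.

```python
def prefers(pref, new, current):
    """True if `new` is ranked strictly above `current` (None = unmatched)."""
    if new not in pref:
        return False
    if current is None:
        return True
    return pref.index(new) < pref.index(current)

def blocking_pairs(matching, men_pref, women_pref):
    """All pairs (m, w) where both strictly prefer each other to their match."""
    pairs = []
    for m, mp in men_pref.items():
        for w in mp:
            if prefers(mp, w, matching.get(m)) and \
               prefers(women_pref[w], m, matching.get(w)):
                pairs.append((m, w))
    return pairs

men = {"m1": ["w1", "w2"], "m2": ["w1", "w2"]}
women = {"w1": ["m2", "m1"], "w2": ["m1", "m2"]}
# Matching m1-w1, m2-w2 is blocked: m2 and w1 both prefer each other.
mu = {"m1": "w1", "w1": "m1", "m2": "w2", "w2": "m2"}
print(blocking_pairs(mu, men, women))  # [('m2', 'w1')]
```

A matching with no blocking pairs (here, m1-w2 and m2-w1) is stable in the classical sense; the paper's Pareto-stable refinement additionally rules out weak blockings by the grand coalition.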
Abstract:
The purpose of this paper is to propose a multiobjective optimization approach to the manufacturing cell formation problem that explicitly considers the performance of the manufacturing system. Cells are formed so as to simultaneously minimize three conflicting objectives: the level of work-in-process, the number of intercell moves and the total machinery investment. A genetic algorithm searches the design space in order to approximate the Pareto-optimal set. The objective values of each candidate solution in the population are assigned by running a discrete-event simulation, whose model is automatically generated according to the number of machines and their distribution among cells implied by that solution. The potential of this approach is evaluated through its application to an illustrative example and to a case from the literature. Analysis of the results shows that the approach is capable of generating a set of alternative manufacturing cell configurations that optimize multiple performance measures, greatly improving the decision-making process involved in planning and designing cellular systems. (C) 2010 Elsevier Ltd. All rights reserved.
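The Pareto-dominance test at the heart of such a multiobjective search can be sketched as follows (made-up objective vectors, not the paper's simulation outputs; all three objectives — work-in-process, intercell moves, investment — are minimized):

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(population):
    """Keep only solutions not dominated by any other (the approximate Pareto set)."""
    return [p for p in population
            if not any(dominates(q, p) for q in population if q != p)]

# Invented (WIP, intercell moves, machinery investment) triples:
candidates = [(12.0, 30, 5.0), (10.0, 34, 5.0), (12.0, 30, 4.5), (15.0, 40, 6.0)]
print(pareto_front(candidates))  # [(10.0, 34, 5.0), (12.0, 30, 4.5)]
```

A GA such as the one described would apply this filter (or a ranking based on it) each generation to steer the population toward the front.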
Abstract:
In this article a novel algorithm based on the chemotaxis process of Escherichia coli is developed to solve multiobjective optimization problems. The algorithm uses a fast nondominated sorting procedure, communication between colony members and a simple chemotactic strategy to change the bacterial positions, in order to explore the search space and find several optimal solutions. The proposed algorithm is validated on 11 benchmark problems, with three different performance measures used to compare it with the NSGA-II genetic algorithm and with the particle-swarm-based algorithm NSPSO. (C) 2009 Elsevier Ltd. All rights reserved.
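A fast nondominated sorting procedure, as popularized by NSGA-II, ranks a population into successive Pareto fronts. A compact sketch with illustrative objective vectors (minimization assumed; not the paper's implementation):

```python
def fast_nondominated_sort(objs):
    """Partition solution indices into Pareto fronts, best front first."""
    def dom(a, b):
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
    n = len(objs)
    dominated = [[] for _ in range(n)]   # dominated[p]: indices that p dominates
    count = [0] * n                      # count[p]: how many solutions dominate p
    fronts = [[]]
    for p in range(n):
        for q in range(n):
            if dom(objs[p], objs[q]):
                dominated[p].append(q)
            elif dom(objs[q], objs[p]):
                count[p] += 1
        if count[p] == 0:
            fronts[0].append(p)
    i = 0
    while fronts[i]:
        nxt = []
        for p in fronts[i]:
            for q in dominated[p]:
                count[q] -= 1
                if count[q] == 0:        # q only dominated by earlier fronts
                    nxt.append(q)
        i += 1
        fronts.append(nxt)
    return fronts[:-1]

print(fast_nondominated_sort([(1, 5), (2, 4), (3, 3), (2, 6), (4, 4)]))
# [[0, 1, 2], [3, 4]]
```

The bookkeeping (domination counts plus dominated lists) is what makes this "fast": each pair is compared only once.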
Abstract:
An extreme precipitation event occurred in the first week of the year 2000, from 1 to 5 January, in the Vale do Paraíba, in the eastern part of the State of São Paulo, Brazil, causing enormous socioeconomic impact, with deaths and destruction. This work studied the event at 10 selected meteorological stations, chosen as those with the most homogeneous data among the stations in the region. A generalized Pareto distribution (GPD) model for extreme 5-day precipitation values was developed individually for each of these stations. In the GPD modelling, a non-stationary approach was adopted, with the annual cycle and a long-term trend as covariates. One conclusion of this investigation is that the precipitation amounts accumulated over the 5 days of the studied event can be classified as extremely rare for the region, with a probability of occurrence of less than 1% at most of the stations, and less than 0.1% at three stations.
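For reference, the GPD models excesses over a high threshold, and exceedance probabilities like those quoted above follow from its survival function. A sketch with invented parameters (the study fits non-stationary parameters per station, which this ignores):

```python
import math

def gpd_sf(x, xi, sigma, mu=0.0):
    """Generalized Pareto survival P(X > x): threshold mu, scale sigma, shape xi."""
    z = (x - mu) / sigma
    if z < 0:
        return 1.0
    if abs(xi) < 1e-12:          # xi -> 0 limit: exponential tail
        return math.exp(-z)
    arg = 1.0 + xi * z
    if arg <= 0:                  # beyond the upper end point (xi < 0)
        return 0.0
    return arg ** (-1.0 / xi)

# Hypothetical parameters for 5-day rainfall excesses over a 100 mm threshold:
p = gpd_sf(300.0, xi=0.1, sigma=60.0, mu=100.0)
print(f"P(exceeding 300 mm) = {p:.4f}")
```

A heavier tail (larger xi) raises the probability assigned to extremes; the study's conclusion amounts to this survival probability being below 0.01 at most stations for the observed 5-day totals.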
Abstract:
Background: In areas with limited infrastructure for microscopy diagnosis, rapid diagnostic tests (RDT) have been shown to be effective. Method: The cost-effectiveness of the OptiMal (R) RDT and thick smear microscopy was estimated and compared. Data were collected in remote areas of 12 municipalities in the Brazilian Amazon. Data sources included the National Malaria Control Programme of the Ministry of Health, the National Healthcare System reimbursement table, hospitalization records, primary data collected from the municipalities, and the scientific literature. The perspective was that of the Brazilian public health system, the analytical horizon ran from the onset of fever until delivery of the diagnostic result to the patient, and the temporal reference was the year 2006. Results were expressed as cost per adequately diagnosed case in 2006 U.S. dollars. Sensitivity analysis was performed on key model parameters. Results: In the base case scenario, assuming 92% and 95% sensitivity of thick smear microscopy for Plasmodium falciparum and Plasmodium vivax, respectively, and 100% specificity for both species, thick smear microscopy is more costly and more effective, with an incremental cost estimated at US$ 549.9 per adequately diagnosed case. In the sensitivity analysis, when the sensitivity and specificity of microscopy for P. vivax were 0.90 and 0.98, respectively, and when its sensitivity for P. falciparum was 0.83, the RDT was more cost-effective than microscopy. Conclusion: Microscopy is more cost-effective than OptiMal (R) in these remote areas if its high accuracy is maintained in the field. The decision on using rapid tests for malaria diagnosis in these areas depends on the actual accuracy of microscopy in the field.
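The incremental cost quoted above is an incremental cost-effectiveness ratio (ICER): the extra cost of the more effective strategy per additional unit of effect. A minimal sketch with made-up cost and effect figures (not the study's data):

```python
def icer(cost_new, effect_new, cost_old, effect_old):
    """Incremental cost per additional unit of effect
    (here the effect unit would be one adequately diagnosed case)."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Hypothetical totals for two diagnostic strategies over the same cohort:
print(icer(cost_new=12000.0, effect_new=95.0,
           cost_old=9000.0, effect_old=90.0))   # 600.0 per extra case
```

Sensitivity analysis, as in the study, re-evaluates this ratio while varying inputs such as test sensitivity and specificity to see when the ranking of strategies flips.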
Abstract:
This work clarifies the relation between network circuit (topology) and behaviour (information transmission and synchronization) in active networks, e.g. neural networks. As an application, we show how one can find network topologies that are able to transmit a large amount of information, possess a large number of communication channels, and are robust under large variations of the network coupling configuration. This theoretical approach is general and does not depend on the particular dynamics of the elements forming the network, since the network topology can be determined by finding a Laplacian matrix (the matrix that describes the connections and the coupling strengths among the elements) whose eigenvalues satisfy some special conditions. To illustrate our ideas and theoretical approaches, we use neural networks of electrically connected chaotic Hindmarsh-Rose neurons.
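The Laplacian matrix mentioned above can be assembled directly from the coupling graph as L = D - A (weighted degree matrix minus weighted adjacency). A small sketch for a ring of four electrically coupled nodes (illustrative only, not the paper's networks):

```python
def laplacian(n, edges):
    """Graph Laplacian L = D - A for an undirected network;
    edges are (i, j, coupling_strength) triples."""
    L = [[0.0] * n for _ in range(n)]
    for i, j, w in edges:
        L[i][j] -= w
        L[j][i] -= w
        L[i][i] += w
        L[j][j] += w
    return L

# Ring of four nodes with unit coupling strength; for cycle graphs the
# Laplacian eigenvalues are known in closed form (here: 0, 2, 2, 4):
L = laplacian(4, [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 0, 1.0)])
for row in L:
    print(row)
```

By construction every row sums to zero and the matrix is symmetric, which is why its eigenvalue spectrum is what conditions such as the ones in the paper are imposed on.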
Abstract:
The optimal discrimination of nonorthogonal quantum states with minimum error probability is a fundamental task in quantum measurement theory as well as an important primitive in optical communication. In this work, we propose and experimentally realize a new and simple quantum measurement strategy capable of discriminating two coherent states with smaller error probabilities than can be obtained using the standard measurement devices: the Kennedy receiver and the homodyne receiver.
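For context, the error probabilities of ideal homodyne and Kennedy receivers for binary coherent states |±α⟩ can be compared with the Helstrom (quantum-optimal) bound. These are textbook expressions for idealized devices — an assumption beyond the abstract — and the paper's own receiver is not reproduced here:

```python
import math

def helstrom(nbar):
    """Quantum-optimal error probability for |alpha> vs |-alpha>, nbar = |alpha|^2."""
    return 0.5 * (1.0 - math.sqrt(1.0 - math.exp(-4.0 * nbar)))

def homodyne(nbar):
    """Ideal homodyne receiver: Gaussian overlap of the two quadrature peaks."""
    return 0.5 * math.erfc(math.sqrt(2.0 * nbar))

def kennedy(nbar):
    """Ideal Kennedy receiver: displace one state to vacuum, then photon counting."""
    return 0.5 * math.exp(-4.0 * nbar)

for nbar in (0.1, 0.25, 1.0):
    print(nbar, helstrom(nbar), homodyne(nbar), kennedy(nbar))
```

Neither standard receiver reaches the Helstrom bound at any mean photon number, and the two cross over (Kennedy wins at larger amplitudes), which is what motivates improved strategies like the one the paper demonstrates.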
Abstract:
It is widely assumed that optimal timing of larval release is of major importance to offspring survival, but the extent to which environmental factors entrain synchronous reproductive rhythms in natural populations is not well known. We sampled the broods of ovigerous females of the common shore crab Pachygrapsus transversus at both sheltered and exposed rocky shores interspersed along a 50-km coastline, during four different periods, to better assess inter-population differences in the timing of larval release and to test for the effect of wave action. Shore-specific patterns were consistent through time. Maximum release fell within 1 day of syzygy on all shores, matching the dates of maximum tidal amplitude. Within this very narrow range, populations at exposed shores anticipated hatching compared to those at sheltered areas, possibly owing to mechanical stimulation by wave action. Average departures from syzygial release ranged consistently among shores from 2.4 to 3.3 days, but in this case we found no evidence of an effect of wave exposure. Therefore, processes varying at the scale of a few kilometres affect the precision of semilunar timing and may produce differences in the survival of recently hatched larvae. Understanding the underlying mechanisms causing departures from the presumed optimal release timing is thus important for a more comprehensive evaluation of the reproductive success of invertebrate populations.
Abstract:
The conditions for maximization of the enzymatic activity of lipase entrapped in a sol-gel matrix were determined for different vegetable oils using an experimental design. The effects of pH, temperature, and biocatalyst loading on lipase activity were assessed using a central composite experimental design, leading to a set of 13 assays, together with response surface analysis. For canola oil and entrapped lipase, statistical analyses showed significant effects of pH and temperature, as well as of the interactions between pH and temperature and between temperature and biocatalyst loading. For olive oil and entrapped lipase, pH was the only statistically significant variable. This study demonstrated that response surface analysis is an appropriate methodology for maximizing the percentage of hydrolysis as a function of pH, temperature, and lipase loading.
Abstract:
Cheese whey powder (CWP) is an attractive raw material for ethanol production, since it is a dried and concentrated form of cheese whey and contains lactose in addition to nitrogen, phosphate and other essential nutrients. In the present work, deproteinized CWP was used as the fermentation medium for ethanol production by Kluyveromyces fragilis. The individual and combined effects of initial lactose concentration (50-150 kg m⁻³), temperature (25-35 °C) and inoculum concentration (1-3 kg m⁻³) were investigated through a 2³ full-factorial central composite design, and the optimal conditions for maximizing ethanol production were determined. According to the statistical analysis, in the studied range of values only the initial lactose concentration had a significant effect on ethanol production, with higher product formation as the initial substrate concentration was increased. Assays with initial lactose concentrations varying from 150 to 250 kg m⁻³ were thus performed and revealed that an initial lactose concentration of 200 kg m⁻³, an inoculum concentration of 1 kg m⁻³ and a temperature of 35 °C were the best conditions for maximizing ethanol production from the CWP solution. Under these conditions, 80.95 kg m⁻³ of ethanol was obtained after 44 h of fermentation. (C) 2011 Elsevier Ltd. All rights reserved.
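A central composite design such as the 2³ full-factorial CCD mentioned above combines factorial corners, axial points and centre replicates in coded units. A generic sketch (the exact run count depends on the number of centre points, which the abstract does not state):

```python
from itertools import product

def central_composite(k, alpha=None, n_center=1):
    """Rotatable central composite design in coded units:
    2^k factorial corners, 2k axial points at +/- alpha, plus centre replicates."""
    if alpha is None:
        alpha = (2.0 ** k) ** 0.25       # rotatability criterion: alpha = (2^k)^(1/4)
    corners = [list(p) for p in product((-1.0, 1.0), repeat=k)]
    axial = []
    for i in range(k):
        for a in (-alpha, alpha):
            pt = [0.0] * k
            pt[i] = a
            axial.append(pt)
    centre = [[0.0] * k for _ in range(n_center)]
    return corners + axial + centre

# Three coded factors, e.g. lactose concentration, temperature, inoculum level:
runs = central_composite(3)
print(len(runs))   # 8 factorial + 6 axial + 1 centre = 15 runs
```

Coded levels (-1, +1, ±alpha, 0) are then mapped linearly onto the natural ranges of each factor before the assays are run.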
Abstract:
Enzyme production is a growing field in biotechnology, and increasing attention has been devoted to the solid-state fermentation (SSF) of lignocellulosic biomass for the production of industrially relevant lignocellulose-deconstruction enzymes, especially manganese peroxidase (MnP), which plays a crucial role in lignin degradation. However, there is a scarcity of studies on the extraction of secreted metabolites, which are commonly bound to the fermented solids, preventing their accurate detection and limiting recovery efficiency. In the present work, we assessed the effect of extraction process variables (pH, stirring rate, temperature, and extraction time) on the recovery efficiency of MnP obtained by SSF of eucalyptus residues by Lentinula edodes, using statistical design of experiments. The results indicated that, of the variables studied, pH was the most significant (p < 0.05) parameter affecting MnP recovery yield, while temperature, extraction time, and stirring rate presented no statistically significant effects in the studied range. The optimum pH for extraction of MnP was 4.0-5.0, which yielded 1500-1700 IU kg⁻¹ of enzyme activity at an extraction time of 4-5 h, under static conditions at room temperature. (C) 2011 Elsevier Ltd. All rights reserved.
Abstract:
This paper proposes an optimal sensitivity approach applied to the tertiary loop of automatic generation control. The approach is based on the non-linear perturbation theorem. From an optimal operating point obtained by an optimal power flow, a new optimal operating point after a perturbation is determined directly, i.e., without an iterative process. This new optimal operating point satisfies the constraints of the problem for small perturbations in the loads. The participation factors and the voltage set points of the automatic voltage regulators (AVR) of the generators are determined by the optimal sensitivity technique, considering the effects of active power loss minimization and the network constraints. The participation factors and voltage set points of the generators are supplied directly to a computational program for dynamic simulation of automatic generation control, referred to here as the power sensitivity mode. Test results are presented to show the good performance of this approach. (C) 2008 Elsevier B.V. All rights reserved.
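The idea of jumping directly to a new optimum after a load perturbation, via participation factors, can be illustrated with a drastically simplified lossless economic dispatch (quadratic costs, a single power-balance constraint, invented coefficients — the paper's OPF with losses and network constraints is far richer):

```python
def dispatch(load, a):
    """Minimize sum(a_i * p_i^2) subject to sum(p_i) = load (lossless, no limits).
    The KKT conditions give p_i = load * (1/a_i) / sum(1/a_j), so the optimum
    after a load perturbation follows directly, with no iterative re-solve."""
    inv = [1.0 / ai for ai in a]
    s = sum(inv)
    return [load * x / s for x in inv]

a = [0.01, 0.02, 0.04]                  # invented quadratic cost coefficients
base = dispatch(700.0, a)               # [400.0, 200.0, 100.0]
factors = [(1.0 / ai) / sum(1.0 / aj for aj in a) for ai in a]
perturbed = dispatch(720.0, a)
# Each generator picks up factor_i * 20 of the 20-unit load increase:
print([p - b for p, b in zip(perturbed, base)])
```

The participation factors play the role the abstract describes: they tell each generator its share of any small load change without re-running the optimization.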
Abstract:
This paper presents a new approach to the transmission loss allocation problem in a deregulated system. The approach belongs to the family of incremental methods and treats all the constraints of the network, i.e. control, state and functional constraints. It is based on the perturbation-of-optimum theorem. From a given optimal operating point obtained by an optimal power flow, the loads are perturbed and a new optimal operating point that satisfies the constraints is determined by sensitivity analysis. This solution is used to obtain the loss allocation coefficients for the generators and loads of the network. Numerical results compare the proposed approach with other methods on a well-known transmission network, the IEEE 14-bus system. Another test emphasizes the importance of considering the operational constraints of the network. Finally, the approach is applied to an actual Brazilian equivalent network composed of 787 buses and compared with the technique currently used by the Brazilian Control Center. (c) 2007 Elsevier Ltd. All rights reserved.
Abstract:
This work explores the design of piezoelectric transducers based on functional material gradation, here named functionally graded piezoelectric transducers (FGPTs). Depending on the application, FGPTs must achieve several goals, essentially related to the transducer resonance frequencies, vibration modes, and excitation strength at specific resonance frequencies. Several approaches can be used to achieve these goals; however, this work focuses on finding the optimal material gradation of FGPTs by means of topology optimization. Three objective functions are proposed: (i) to obtain the FGPT optimal material gradation for maximizing specified resonance frequencies; (ii) to design piezoelectric resonators, in which the optimal material gradation is found to achieve desirable eigenvalues and eigenmodes; and (iii) to find the optimal material distribution of FGPTs that maximizes specified excitation strength. To track the desired vibration mode, a mode-tracking method utilizing the 'modal assurance criterion' is applied. The continuous change of piezoelectric, dielectric, and elastic properties is achieved by using the graded finite element concept. The optimization algorithm is constructed based on sequential linear programming and the concept of continuum approximation of material distribution. To illustrate the method, 2D FGPTs are designed for each objective function. In addition, the FGPT performance is compared with that of a non-graded transducer.
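The modal assurance criterion used for mode tracking measures the correlation between two mode shapes. A real-valued sketch (the general definition uses complex conjugation, omitted here):

```python
def mac(phi1, phi2):
    """Modal assurance criterion for real mode shapes:
    MAC = (phi1 . phi2)^2 / ((phi1 . phi1) * (phi2 . phi2)).
    1 means the same shape up to scaling; values near 0 mean uncorrelated shapes."""
    dot = sum(a * b for a, b in zip(phi1, phi2))
    return dot * dot / (sum(a * a for a in phi1) * sum(b * b for b in phi2))

print(mac([1.0, 2.0, 1.0], [2.0, 4.0, 2.0]))   # 1.0: same mode, different scale
print(mac([1.0, 0.0], [0.0, 1.0]))             # 0.0: orthogonal shapes
```

During optimization, the mode from the previous iteration with the highest MAC against each current eigenvector identifies which eigenvalue to keep following as the design changes.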
Abstract:
The computational design of a composite where the properties of its constituents change gradually within a unit cell can be successfully achieved by means of a material design method that combines topology optimization with homogenization. This is an iterative numerical method, which leads to changes in the composite material unit cell until the desired properties (or performance) are obtained. This method has been applied to several types of materials in recent years. In this work, the objective is to extend the material design method to obtain functionally graded material architectures, i.e. materials that are graded at the local (e.g. microstructural) level. Consistent with this goal, a continuum distribution of the design variable inside the finite element domain is considered to represent a fully continuous material variation during the design process. Thus the topology optimization naturally leads to a smoothly graded material system. To illustrate the theoretical and numerical approaches, numerical examples are provided. The homogenization method is verified by considering one-dimensional material gradation profiles for which analytical solutions for the effective elastic properties are available. The verification of the homogenization method is extended to two dimensions considering a trigonometric material gradation, and a material variation with discontinuous derivatives. These are also used as benchmark examples to verify the optimization method for functionally graded material cell design. Finally, the influence of material gradation on extreme materials is investigated, including materials with near-zero shear modulus and materials with negative Poisson's ratio.
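The kind of 1D verification described above can be mimicked for a graded bar loaded in series: the effective modulus is the harmonic (Reuss) average of the gradation profile, and for an exponential profile a closed form exists. An illustrative check with an invented profile (not the paper's benchmark):

```python
import math

def effective_modulus_series(E, n=10000):
    """Reuss (series) effective modulus of a unit bar with gradation E(x), x in [0, 1]:
    E_eff = 1 / integral_0^1 dx / E(x), evaluated here with the midpoint rule."""
    h = 1.0 / n
    compliance = sum(h / E((i + 0.5) * h) for i in range(n))
    return 1.0 / compliance

beta, E0 = 2.0, 10.0
numeric = effective_modulus_series(lambda x: E0 * math.exp(beta * x))
analytic = beta * E0 / (1.0 - math.exp(-beta))    # closed form for this profile
print(numeric, analytic)
```

Matching a numerical homogenization result against such an analytical profile, as the paper does in 1D before moving to 2D benchmarks, is a standard sanity check for the method.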