931 results for Two-Level Optimization
Abstract:
Electricity markets are systems for effecting the purchase and sale of electricity, using supply and demand to set energy prices. Two major market models are often distinguished: pools and bilateral contracts. Pool prices tend to change quickly, and their variations are usually highly unpredictable. Consequently, market participants often enter into bilateral contracts to hedge against pool price volatility. This article addresses the challenge of optimizing the portfolio of clients managed by trader agents. Typically, traders buy energy in day-ahead markets and sell it to a set of target clients by negotiating bilateral contracts involving three-rate tariffs. Traders sell energy by considering the prices of a reference week and five different types of clients. They analyze several tariffs and determine the best share of customers, i.e., the share that maximizes profit. © 2014 IEEE.
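As a toy illustration of the share-selection problem the abstract describes (all profit margins, energy volumes, and the supply cap below are hypothetical, not the article's data), a brute-force search over candidate client shares can pick the mix of the five client types that maximizes the trader's profit:

```python
# Hypothetical illustration of choosing the client share that maximizes a
# trader's profit. Figures per 10% share of each client type already net
# three-rate-tariff revenue against the day-ahead purchase cost.
from itertools import product

profit_per_tenth = {"domestic": 120.0, "small_commerce": 150.0,
                    "medium_commerce": 90.0, "large_commerce": 160.0,
                    "industrial": 110.0}          # EUR per 10% share
energy_per_tenth = {"domestic": 8.0, "small_commerce": 10.0,
                    "medium_commerce": 12.0, "large_commerce": 20.0,
                    "industrial": 25.0}           # MWh per 10% share
SUPPLY_CAP = 120.0                                # MWh the trader can buy

def best_share():
    """Enumerate shares in tenths summing to 100%, keep the feasible best."""
    types = list(profit_per_tenth)
    best = (float("-inf"), None)
    for shares in product(range(11), repeat=len(types)):
        if sum(shares) != 10:
            continue
        energy = sum(s * energy_per_tenth[t] for s, t in zip(shares, types))
        if energy > SUPPLY_CAP:                   # day-ahead purchase limit
            continue
        profit = sum(s * profit_per_tenth[t] for s, t in zip(shares, types))
        best = max(best, (profit, shares))
    return best

print(best_share())
```

With these illustrative numbers the search trades off the high-margin but energy-hungry client types against the supply cap, mimicking the profitability-versus-volume trade-off the article studies.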
Abstract:
In the last twenty years genetic algorithms (GAs) have been applied in a plethora of fields, such as control, system identification, robotics, planning and scheduling, image processing, and pattern and speech recognition (Bäck et al., 1997). In robotics, the problems of trajectory planning, collision avoidance, and manipulator structure design considering a single criterion have been solved using several techniques (Alander, 2003). Most engineering applications, however, require the optimization of several criteria simultaneously. Often the problems are complex, include discrete and continuous variables, and there is no prior knowledge about the search space. Such problems are considerably more complex, since they consider multiple design criteria simultaneously within the optimization procedure. This is known as multi-criteria (or multi-objective) optimization, which has been addressed successfully through GAs (Deb, 2001). The overall aim of multi-criteria evolutionary algorithms is to achieve a set of non-dominated optimal solutions known as the Pareto front. At the end of the optimization procedure, instead of a single optimal (or near-optimal) solution, the decision maker can select a solution from the Pareto front. Some of the key issues in multi-criteria GAs are: i) the number of objectives, ii) obtaining a Pareto front as wide as possible, and iii) achieving a uniformly spread Pareto front. Indeed, multi-objective techniques using GAs have been increasing in relevance as a research area. In 1989, Goldberg suggested the use of a GA to solve multi-objective problems, and since then other researchers have developed new methods, such as the multi-objective genetic algorithm (MOGA) (Fonseca & Fleming, 1995), the non-dominated sorting genetic algorithm (NSGA) (Deb, 2001), and the niched Pareto genetic algorithm (NPGA) (Horn et al., 1994), among several other variants (Coello, 1998).
In this work the trajectory planning problem considers: i) robots with 2 and 3 degrees of freedom (dof), ii) the inclusion of obstacles in the workspace, and iii) up to five criteria used to qualify the evolving trajectory, namely: joint traveling distance, joint velocity, end-effector Cartesian distance, end-effector Cartesian velocity, and the energy involved. These criteria are used to minimize the joint and end-effector traveled distance, the trajectory ripple, and the energy required by the manipulator to reach the destination point. Bearing these ideas in mind, this chapter addresses the planning of robot trajectories, meaning the development of an algorithm to find a continuous motion that takes the manipulator from a given starting configuration to a desired end position without colliding with any obstacle in the workspace. The chapter is organized as follows. Section 2 describes trajectory planning and several approaches proposed in the literature. Section 3 formulates the problem, namely the representation adopted to solve the trajectory planning and the objectives considered in the optimization. Section 4 studies the algorithm convergence. Section 5 studies a 2R manipulator (i.e., a robot with two rotational joints/links) when the trajectory optimization considers two and five objectives. Sections 6 and 7 present the results for the 3R redundant manipulator with five objectives and for other complementary experiments, respectively. Finally, section 8 draws the main conclusions.
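The non-dominated set that multi-criteria GAs aim for can be sketched with a generic Pareto filter; this is an illustrative routine over hypothetical trajectory scores (two minimization criteria), not the chapter's GA:

```python
# Illustrative sketch (not the chapter's implementation): extracting the
# non-dominated (Pareto) set among candidate trajectories scored on
# minimization criteria, e.g. joint traveling distance and energy.

def dominates(a, b):
    """True if solution a dominates b: no worse in all criteria, better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(scores):
    """Return the indices of the non-dominated solutions."""
    return [i for i, a in enumerate(scores)
            if not any(dominates(b, a) for j, b in enumerate(scores) if j != i)]

# Hypothetical criteria vectors: (joint traveling distance, energy)
candidates = [(3.0, 5.0), (2.0, 6.0), (4.0, 4.0), (3.2, 5.2), (4.5, 4.5)]
print(pareto_front(candidates))
```

The decision maker then picks one trajectory from the returned front, which is exactly the selection step the text describes.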
Abstract:
This chapter considers the particle swarm optimization (PSO) algorithm as a system whose dynamics are studied from the point of view of fractional calculus. In this study some initial swarm particles are randomly changed to stimulate the system, and its response is compared with a non-perturbed reference response. The perturbation effect on the PSO evolution is observed from the perspective of the fitness time behaviour of the best particle. The dynamics are represented through the median of a sample of experiments, adopting Fourier analysis to describe the phenomena. The influence upon the global dynamics is also analyzed. Two main issues are reported: the PSO dynamics when the system is subjected to random perturbations, and its modelling with fractional-order transfer functions.
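A minimal sketch of the perturbation experiment described above, assuming a standard PSO on the sphere fitness function (the chapter's actual fitness functions, parameters, and fractional-calculus analysis are not reproduced here): two runs share the same initial swarm except for a few randomly perturbed particles, and the best-particle fitness response of each run is recorded for comparison.

```python
# Hypothetical setup: standard PSO (inertia w, cognitive c1, social c2)
# minimizing the sphere function; the "system response" is the best
# fitness recorded at each iteration.
import random

def pso(swarm, iters=100, w=0.7, c1=1.5, c2=1.5):
    fitness = lambda x: sum(xi * xi for xi in x)      # sphere function
    vel = [[0.0] * len(p) for p in swarm]
    pbest = [p[:] for p in swarm]
    gbest = min(pbest, key=fitness)[:]
    history = []
    for _ in range(iters):
        for i, p in enumerate(swarm):
            for d in range(len(p)):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - p[d])
                             + c2 * random.random() * (gbest[d] - p[d]))
                p[d] += vel[i][d]
            if fitness(p) < fitness(pbest[i]):
                pbest[i] = p[:]
                if fitness(p) < fitness(gbest):
                    gbest = p[:]
        history.append(fitness(gbest))                # best-particle response
    return history

random.seed(0)
reference = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(20)]
perturbed = [p[:] for p in reference]
for p in perturbed[:3]:                               # randomly change a few
    p[0] += random.uniform(-1, 1)                     # initial particles

ref_hist = pso([p[:] for p in reference])
pert_hist = pso([p[:] for p in perturbed])
print(ref_hist[-1], pert_hist[-1])                    # compare final responses
```

In the chapter, many such paired runs are aggregated (via the median) and analyzed in the frequency domain; the sketch only produces the raw time responses.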
Abstract:
Introduction: the multimodality environment requires a greater understanding of the imaging technologies used, of the limitations of these technologies, and of how best to interpret the results, alongside dose optimization, the introduction of new techniques, and the gap between current practice and best practice. Incidental findings in low-dose CT images obtained as part of the hybrid imaging process are an increasing phenomenon with advancing CT technology, with resultant ethical and medico-legal dilemmas; understanding the limitations of these procedures is important when reporting images and recommending follow-up. A free-response observer performance study was used to evaluate lesion detection in low-dose CT images obtained during attenuation-correction acquisitions for myocardial perfusion imaging on two hybrid imaging systems.
Abstract:
The increasing use of Carbon-Fibre Reinforced Plastic (CFRP) laminates in high-responsibility applications raises the issue of their handling after damage. The availability of efficient repair methods is essential to restore the strength of the structure, and accurate predictive tools for the repairs' behaviour are likewise essential to reduce the costs and time associated with extensive testing. This work reports a numerical study of the tensile behaviour of three-dimensional (3D) adhesively-bonded scarf repairs in CFRP structures using a ductile adhesive. The Finite Element (FE) analysis was performed in ABAQUS® and Cohesive Zone Models (CZMs) were used to simulate damage in the adhesive layer. A parametric study was performed on two geometric parameters. The use of overlaminating plies covering the repaired region on the outer or both repair surfaces was also tested as an attempt to increase the repair efficiency. The results allowed the proposal of design principles for repairing CFRP structures.
Abstract:
The latest LHC data confirmed the existence of a Higgs-like particle and made interesting measurements on its decays into gamma gamma, ZZ*, WW*, tau(+)tau(-), and b b-bar. It is expected that a decay into Z gamma might be measured at the next LHC round, for which there already exists an upper bound. The Higgs-like particle could be a mixture of scalar with a relatively large component of pseudoscalar. We compute the decay of such a mixed state into Z gamma, and we study its properties in the context of the complex two-Higgs-doublet model, analysing the effect of the current measurements on the four versions of this model. We show that a measurement of the h -> Z gamma rate at a level consistent with the SM can be used to place interesting constraints on the pseudoscalar component. We also comment on the issue of a wrong-sign Yukawa coupling for the bottom quark in Type II models.
Abstract:
This paper aims at providing a combined strategy for solving systems of equalities and inequalities. The combined strategy uses two types of steps: a global search step and a local search step. The global step relies on a tabu search heuristic, and the local step uses the deterministic Hooke and Jeeves pattern search. The choice of step at each iteration is based on the reduction of the l2-norm of the error function of the equivalent system of equations, compared with the previous iteration.
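The local step can be illustrated with a minimal Hooke and Jeeves exploratory search that accepts only moves reducing the squared l2-norm of the residual; the 2x2 system below is a hypothetical example, and the tabu global step and the step-selection rule are omitted:

```python
# Illustrative sketch (hypothetical system, simplified exploratory moves),
# not the paper's exact algorithm: Hooke and Jeeves pattern search driving
# the l2-norm of the residual of an equivalent system of equations to zero.

def residual_norm_sq(x):
    # Equivalent system F(x) = 0 for the hypothetical system
    # x0 + x1 - 3 = 0 and x0 - x1 - 1 = 0 (solution x = (2, 1)).
    f1 = x[0] + x[1] - 3.0
    f2 = x[0] - x[1] - 1.0
    return f1 * f1 + f2 * f2

def hooke_jeeves(x, step=1.0, tol=1e-8, shrink=0.5):
    best = residual_norm_sq(x)
    while step > tol:
        improved = False
        for d in range(len(x)):            # exploratory moves along each axis
            for delta in (step, -step):
                trial = x[:]
                trial[d] += delta
                val = residual_norm_sq(trial)
                if val < best:             # accept only norm-reducing moves
                    x, best, improved = trial, val, True
                    break
        if not improved:
            step *= shrink                 # no progress: refine the step size
    return x

sol = hooke_jeeves([0.0, 0.0])
print(sol)  # close to (2.0, 1.0)
```

In the paper this deterministic local search alternates with the tabu-search global step according to how much the l2-norm dropped in the previous iteration.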
Abstract:
We present the modeling efforts on antenna design and frequency selection to monitor brain temperature during prolonged surgery using noninvasive microwave radiometry. A tapered log-spiral antenna design is chosen for its wideband characteristics that allow higher power collection from deep brain. Parametric analysis with the software HFSS is used to optimize antenna performance for deep brain temperature sensing. Radiometric antenna efficiency (eta) is evaluated in terms of the ratio of power collected from the brain to the total power received by the antenna. Anatomical information extracted from several adult computed tomography scans is used to establish design parameters for constructing an accurate layered 3-D tissue phantom. This head phantom includes separate brain and scalp regions, with tissue-equivalent liquids circulating at independent temperatures on either side of an intact skull. The optimized frequency band is 1.1-1.6 GHz, producing an average antenna efficiency of 50.3% from a two-turn log-spiral antenna. The entire sensor package is contained in a lightweight and low-profile 2.8 cm diameter by 1.5 cm high assembly that can be held in place over the skin with an electromagnetic interference shielding adhesive patch. The calculated radiometric equivalent brain temperature tracks within 0.4 degrees C of the measured brain phantom temperature when the brain phantom is lowered 10 degrees C and then returned to the original temperature (37 degrees C) over a 4.6-h experiment. The numerical and experimental results demonstrate that the optimized 2.5-cm log-spiral antenna is well suited for the noninvasive radiometric sensing of deep brain temperature.
Abstract:
In order to correctly assess biaxial fatigue material properties, one must experimentally test different load conditions and stress levels. With the rise of new in-plane biaxial fatigue testing machines, which use smaller and more efficient electrical motors instead of the conventional hydraulic machines, it is necessary to reduce the specimen size and to ensure that the specimen geometry is appropriate for the installed load capacity. At present there are no standard specimen geometries, and the indications in the literature on how to design an efficient test specimen are insufficient. The main goal of this paper is to present a methodology for obtaining an optimal cruciform specimen geometry, with thickness reduction in the gauge area, appropriate for fatigue crack initiation, as a function of the base material sheet thickness used to build the specimen. The geometry is optimized for maximum stress using several parameters, ensuring that in the gauge area the stress is uniform and maximum under two limit phase-shift loading conditions. Therefore, fatigue damage will always initiate at the center of the specimen, avoiding failure outside this region. Using the Renard series of preferred numbers for the base material sheet thickness as a reference, the remaining geometry parameters are optimized using a derivative-free methodology called the direct multi-search (DMS) method. The final optimal geometry as a function of the base material sheet thickness is proposed as a guideline for cruciform specimen design and as a possible contribution to a future standard on in-plane biaxial fatigue tests. © 2014, Gruppo Italiano Frattura. All rights reserved.
Abstract:
We discuss theoretical and phenomenological aspects of two-Higgs-doublet extensions of the Standard Model. In general, these extensions have scalar-mediated flavour-changing neutral currents which are strongly constrained by experiment. Various strategies are discussed to control these flavour-changing scalar currents, and their phenomenological consequences are analysed. In particular, scenarios with natural flavour conservation are investigated, including the so-called type I and type II models as well as lepton-specific and inert models. Type III models are then discussed, where scalar flavour-changing neutral currents are present at tree level but are suppressed either by a specific ansatz for the Yukawa couplings or by the introduction of family symmetries leading to a natural suppression mechanism. We also consider the phenomenology of charged scalars in these models. Next we turn to the role of symmetries in the scalar sector. We discuss the six symmetry-constrained scalar potentials and their extension into the fermion sector. The vacuum structure of the scalar potential is analysed, including a study of the vacuum stability conditions on the potential; the renormalization-group improvement of these conditions is also presented. The stability of the tree-level minimum of the scalar potential in connection with electric charge conservation, and its behaviour under CP, is analysed. The question of CP violation is addressed in detail, including the cases of explicit CP violation and spontaneous CP violation. We present a detailed study of weak-basis invariants which are odd under CP. These invariants allow for the possibility of studying the CP properties of any two-Higgs-doublet model in an arbitrary Higgs basis. A careful study of spontaneous CP violation is presented, including an analysis of the conditions which have to be satisfied in order for a vacuum to violate CP.
We present minimal models of CP violation where the vacuum phase is sufficient to generate a complex CKM matrix, which is at present a requirement for any realistic model of spontaneous CP violation.
Abstract:
As is well known, competitive electricity markets require new computing tools for power companies operating in retail markets, in order to enhance the management of their energy resources. In recent years there has been an increase in renewable penetration in micro-generation, which begins to co-exist with the other existing power generation, giving rise to a new type of consumer. This paper develops a methodology to be applied to the management of aggregators. The aggregator establishes bilateral contracts with its clients in which the conditions for purchasing and selling energy are negotiated, not only in terms of prices but also of other conditions that allow more flexibility in the way generation and consumption are addressed. The aggregator agent needs a tool to support decision making in order to compose and select its customers' portfolio in an optimal way, for a given level of profitability and risk.
Abstract:
In São Paulo, Brazil, between November 1980 and July 1982, 1614 newborns of middle socioeconomic background and 1156 newborns of low socioeconomic background were examined for the occurrence of congenital cytomegalovirus (CMV) infection, by isolation of virus from urine samples or detection of specific anti-CMV IgM in umbilical cord serum tested by immunofluorescence. In the low socioeconomic population, the prevalence of CMV complement-fixing antibodies in mothers was 84.4% (151/179), and the incidence of congenital infection assessed by virus isolation was 0.98% (5/508), as compared with 0.46% (3/648) in the group of newborns tested by detection of specific anti-CMV IgM in umbilical cord serum. In the middle socioeconomic population, the prevalence of CMV complement-fixing antibodies in mothers was 66.5% (284/427), and the incidence of CMV congenital infection was 0.39% (2/518) in the group of newborns screened by virus isolation and 0.18% (2/1096) in the group tested by detection of specific anti-CMV IgM. In the present study none of the 12 congenitally infected newborns presented clinically apparent disease at birth.
Abstract:
To evaluate the prevalence of antibody against hepatitis A in two socioeconomically distinct populations of a developing country, 540 serum specimens from children and adults living in São Paulo, Brazil, were tested for IgG anti-HAV by a commercial radioimmunoassay (Havab, Abbott Laboratories). The prevalence of anti-HAV in low socioeconomic level subjects was 75.0% in children 2-11 years old and 100.0% in adults, whereas at the middle socioeconomic level significantly lower prevalences were observed (40.3% in children 2-11 years old and 91.9% in adults). Voluntary blood donors of middle socioeconomic level showed a prevalence of 90.4%. These data suggest that hepatitis A infection remains a highly endemic disease in São Paulo, Brazil.
Abstract:
By 2020, Europe must reduce its greenhouse gas emissions by 20%, obtain 20% of its energy production from renewable sources, and increase energy efficiency by 20%. These are the targets set by the European Union, known as 20/20/20 [1]. The Matosinhos Refinery is an industrial complex operating in the refining sector with concerns about energy efficiency and the underlying environmental aspects. As part of the energy rationalization of its refineries, Galp Energia has been implementing a set of measures, adopting the best available technologies with the aim of reducing energy consumption, promoting energy efficiency, and cutting carbon dioxide emissions. To support these measures, a comparative study was carried out that allowed the company to define the measures considered priorities. One solution found involves carrying out projects that require no investment and have immediate effect, such as increasing the energy efficiency of the furnaces [1]. The main objective of this work, carried out at Galp Energia S.A., was the energy optimization of the Propane Deasphalting Unit of the Base Oils Plant. This optimization was based on recovering the energy of the bottom stream of rectification column T2003C, which has a heat duty of 2.79 Gcal/h. After surveying all the process variables of this unit, especially the heat duties of the streams involved, it was concluded that furnace H2101 could be replaced by two heat exchangers, thereby reducing energy consumption: the bottom stream of column T2003, with a heat duty of 2.79 Gcal/h, can exchange heat with the asphalt-propane mixture stream, bringing it to a temperature higher than that obtained with the furnace in operation.
An economic analysis of the furnace's fuel-oil consumption and its cost over a one-year period was carried out, giving a fuel cost of €611,396.00. The acquisition cost of the heat exchangers is €86,355.97, making the change proposed in this project profitable.
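A back-of-the-envelope payback check, assuming the furnace's annual fuel-oil cost is entirely avoided once the two heat exchangers replace furnace H2101 (the thesis's detailed economics may differ):

```python
# Simple payback period from the figures quoted in the abstract; assumes the
# full annual fuel-oil cost of furnace H2101 is saved by the exchangers.

annual_fuel_cost = 611_396.00      # EUR/year, furnace H2101 fuel oil
exchanger_cost = 86_355.97         # EUR, acquisition of the two exchangers

payback_years = exchanger_cost / annual_fuel_cost
print(f"simple payback: {payback_years:.2f} years "
      f"({payback_years * 12:.1f} months)")
```

Even if only a fraction of the fuel cost were avoided, a payback well under a year would support the abstract's conclusion that the change is profitable.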
Abstract:
Master's final project for obtaining the degree of Master in Mechanical Engineering