974 results for simulation-optimization
Abstract:
Involving groups in important management processes such as decision making has several advantages. By discussing and combining ideas, counter-ideas, critical opinions, identified constraints, and alternatives, a group of individuals can test potentially better solutions, sometimes in the form of new products, services, and plans. In the past few decades, operations research, AI, and computer science have had tremendous success creating software systems that can achieve optimal solutions, even for complex problems. The only drawback is that people don't always agree with these solutions. Sometimes this dissatisfaction is due to an incorrect parameterization of the problem. Nevertheless, the reasons people don't like a solution might not be quantifiable, because those reasons are often based on aspects such as emotion, mood, and personality. At the same time, monolithic individual decision-support systems centered on optimizing solutions are being replaced by collaborative systems and group decision-support systems (GDSSs) that focus more on establishing connections between people in organizations. These systems follow a kind of social paradigm. Combining both optimization- and social-centered approaches is a topic of current research. However, even if such a hybrid approach can be developed, it will still miss an essential point: the emotional nature of group participants in decision-making tasks. We have developed a context-aware, emotion-based model to design intelligent agents for group decision-making processes. To evaluate this model, we have incorporated it in an agent-based simulator called ABS4GD (Agent-Based Simulation for Group Decision), which we developed. This multiagent simulator considers emotion- and argument-based factors while supporting group decision-making processes. Experiments show that agents endowed with emotional awareness achieve agreements more quickly than those without such awareness. Hence, participant agents that integrate emotional factors in their judgments can be more successful because, in exchanging arguments with other agents, they consider the emotional nature of group decision making.
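To make the idea concrete, here is a minimal, purely illustrative sketch (not the ABS4GD implementation; the agent names, valence update, and concession rule are assumptions) of how emotional awareness could shorten the number of argument-exchange rounds needed to reach an agreement:

```python
import random

class ParticipantAgent:
    """Illustrative participant with a preference table and an emotional valence."""

    def __init__(self, name, preference, emotional=True):
        self.name = name
        self.preference = preference      # utility the agent assigns to each alternative
        self.emotional = emotional        # whether emotional factors influence judgement
        self.valence = 0.0                # current emotional valence in [0, 1]

    def receive_argument(self, strength):
        # A persuasive argument raises valence; emotional agents let this
        # widen the range of alternatives they are willing to accept.
        if self.emotional:
            self.valence = min(1.0, self.valence + strength)

    def accepts(self, alternative):
        tolerance = 0.1 + 0.4 * self.valence              # assumed concession rule
        best = max(self.preference.values())
        return self.preference[alternative] >= best - tolerance


def simulate(agents, alternatives, max_rounds=50):
    """Rounds of argument exchange until every agent accepts one alternative."""
    for round_no in range(1, max_rounds + 1):
        proposal = random.choice(alternatives)
        for agent in agents:
            agent.receive_argument(strength=random.uniform(0.0, 0.2))
        if all(agent.accepts(proposal) for agent in agents):
            return round_no, proposal
    return max_rounds, None


if __name__ == "__main__":
    alts = ["plan_a", "plan_b", "plan_c"]
    prefs = lambda: {a: random.random() for a in alts}
    emotional_group = [ParticipantAgent(f"e{i}", prefs(), emotional=True) for i in range(5)]
    neutral_group = [ParticipantAgent(f"n{i}", prefs(), emotional=False) for i in range(5)]
    rounds, plan = simulate(emotional_group, alts)
    print(f"emotional group agreed on {plan} after {rounds} round(s)")
    rounds, plan = simulate(neutral_group, alts)
    print(f"non-emotional group agreed on {plan} after {rounds} round(s)")
```

Because the assumed tolerance widens with positive valence, the emotionally aware group typically needs fewer rounds in this toy run, mirroring the qualitative finding reported in the abstract.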
Abstract:
Group decision making plays an important role in today's organisations. The impact of decision making is so high and complex that the decision-making process is rarely carried out by a single individual. Simulating group decision making through a multi-agent system is a very interesting research topic. The purpose of this paper is to specify the actors involved in the simulation of a group decision, to present a model for the process of group formation, and to describe the approach taken to implement that model. The group formation model considers the existence of incomplete and negative information, which was identified as crucial to bring the simulation closer to reality.
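As a purely illustrative aside (this is not the paper's group formation model; the facts, weights, and scoring rule are hypothetical), incomplete and negative information about candidate participants could be represented like this, keeping unknown facts distinct from explicitly negative ones:

```python
# Knowledge about each candidate: True = positive, False = explicitly negative,
# None = incomplete (no information available).
candidates = {
    "alice": {"available": True,  "domain_expert": True,  "past_conflict": False},
    "bob":   {"available": None,  "domain_expert": True,  "past_conflict": True},
    "carol": {"available": True,  "domain_expert": None,  "past_conflict": None},
}

def score(facts):
    """Positive facts add, negative facts subtract, unknown facts stay neutral."""
    weights = {"available": 1.0, "domain_expert": 2.0, "past_conflict": -2.0}
    total = 0.0
    for fact, value in facts.items():
        if value is None:
            continue                      # incomplete information: no contribution
        total += weights[fact] if value else -weights[fact]
    return total

# Form a two-member group from the best-scoring candidates.
group = sorted(candidates, key=lambda c: score(candidates[c]), reverse=True)[:2]
print("selected group:", group)
```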
Abstract:
Master's degree in Radiotherapy.
Abstract:
As is well known, competitive electricity markets require new computing tools for power companies that operate in retail markets in order to enhance the management of their energy resources. In recent years there has been an increase in renewable penetration in micro-generation, which begins to co-exist with the other existing power generation, giving rise to a new type of consumer. This paper develops a methodology to be applied to the management of aggregators. The aggregator establishes bilateral contracts with its clients in which the energy purchase and selling conditions are negotiated not only in terms of prices but also of other conditions that allow more flexibility in the way generation and consumption are addressed. The aggregator agent needs a tool to support decision making in order to compose and select its customers' portfolio in an optimal way, for a given level of profitability and risk.
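A hedged sketch of the kind of decision support described: selecting the subset of clients that maximizes expected profit while keeping an aggregate risk measure under a cap. The client figures, the additive risk model, and the brute-force search are illustrative assumptions, not the paper's methodology.

```python
from itertools import combinations

# Hypothetical clients: (name, expected annual profit in k-euro, risk score).
clients = [
    ("household_pv", 4.0, 1.0),
    ("small_shop",   7.5, 2.5),
    ("wind_micro",   9.0, 4.0),
    ("industrial",  15.0, 6.5),
]

def best_portfolio(clients, risk_cap):
    """Exhaustive search: maximize total profit subject to a total-risk cap."""
    best, best_profit = (), 0.0
    for r in range(1, len(clients) + 1):
        for subset in combinations(clients, r):
            profit = sum(c[1] for c in subset)
            risk = sum(c[2] for c in subset)
            if risk <= risk_cap and profit > best_profit:
                best, best_profit = subset, profit
    return [c[0] for c in best], best_profit

names, profit = best_portfolio(clients, risk_cap=8.0)
print(f"portfolio {names} with expected profit {profit:.1f} k-euro")
```

For a realistic number of clients an exact or heuristic optimizer would replace the exhaustive search, but the trade-off being decided (profitability against risk) is the same.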
Abstract:
This paper is a contribution to the assessment and comparison of magnet properties based on magnetic field characteristics, particularly concerning the magnetic induction uniformity in the air gaps. To this end, a solver was developed and implemented to determine the magnetic field of a magnetic core to be used in Fast Field Cycling (FFC) Nuclear Magnetic Resonance (NMR) relaxometry. The electromagnetic field computation is based on a 2D finite-element method (FEM) using both the scalar and the vector potential formulations. Results for the magnetic field lines and the magnetic induction vector in the air gap are presented. The target magnetic induction is 0.2 T, a typical requirement of the FFC NMR technique, which can be achieved with a magnetic core based on permanent magnets or coils. In addition, this application requires high magnetic induction uniformity. To achieve this goal, a solution including superconducting pieces is analyzed. Results are compared with those of a different FEM program.
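As a rough illustration of this kind of computation (not the solver described above), the following sketch solves the 2D magnetostatic vector-potential equation ∇²A_z = −μ₀J_z with simple finite differences instead of FEM, derives B from A, and evaluates the induction uniformity in an assumed air-gap window; the geometry, current densities, and uniformity metric are all illustrative.

```python
import numpy as np

MU0 = 4e-7 * np.pi
n, h = 81, 1e-3                      # grid points per side, 1 mm spacing

# Illustrative source: two opposing current regions (a crude coil cross-section).
J = np.zeros((n, n))
J[20:30, 35:45] = 2e6                # current density (A/m^2)
J[50:60, 35:45] = -2e6

# Solve laplacian(Az) = -mu0 * J with A = 0 on the boundary (Jacobi iterations).
A = np.zeros((n, n))
for _ in range(5000):
    A[1:-1, 1:-1] = 0.25 * (A[2:, 1:-1] + A[:-2, 1:-1] +
                            A[1:-1, 2:] + A[1:-1, :-2] +
                            h * h * MU0 * J[1:-1, 1:-1])

# B = curl(Az z_hat): Bx = dA/dy, By = -dA/dx (central differences).
Bx = np.gradient(A, h, axis=0)
By = -np.gradient(A, h, axis=1)
Bmag = np.hypot(Bx, By)

# Uniformity in an assumed air-gap window between the two current regions.
gap = Bmag[38:43, 35:45]
uniformity = (gap.max() - gap.min()) / gap.mean()
print(f"mean |B| in gap: {gap.mean():.6f} T, non-uniformity: {uniformity:.2%}")
```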
Abstract:
In real optimization problems, the analytical expression of the objective function and its derivatives are usually not known, or they are complex. In these cases it becomes essential to use optimization methods where the calculation of the derivatives, or the verification of their existence, is not necessary: direct search methods or derivative-free methods are one solution. When the problem has constraints, penalty functions are often used. Unfortunately, choosing the penalty parameters is frequently very difficult, because most strategies for choosing them are heuristic. Filter methods appeared as an alternative to penalty functions. A filter algorithm introduces a function that aggregates the constraint violations and constructs a biobjective problem. In this problem a step is accepted if it reduces either the objective function or the constraint violation, which makes filter methods less parameter-dependent than penalty functions. In this work, we present a new direct search method, based on simplex methods, for general constrained optimization that combines the features of the simplex method and filter methods. This method does not compute or approximate any derivatives, penalty constants, or Lagrange multipliers. The basic idea of the simplex filter algorithm is to construct an initial simplex and use it to drive the search. We illustrate the behavior of our algorithm through some examples. The proposed methods were implemented in Java.
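A compressed sketch of the two ingredients being combined, not the authors' exact algorithm (and in Python rather than the Java implementation mentioned above): a Nelder–Mead-style reflection step driven by the simplex, and a filter acceptance test that keeps a trial point only if no stored pair (constraint violation, objective) dominates it.

```python
import numpy as np

def constraint_violation(x):
    """Aggregate violation h(x) >= 0 for the illustrative constraint x0 + x1 >= 1."""
    return max(0.0, 1.0 - (x[0] + x[1]))

def objective(x):
    return (x[0] - 2.0) ** 2 + (x[1] - 2.0) ** 2

def filter_accepts(filter_entries, h, f):
    """Accept the trial point if no stored pair (h_i, f_i) dominates (h, f)."""
    return all(h < hi or f < fi for hi, fi in filter_entries)

def simplex_filter(x0, iterations=200, step=0.5):
    simplex = [np.array(x0, float)] + [np.array(x0, float) + step * e
                                       for e in np.eye(len(x0))]
    filter_entries = []
    for _ in range(iterations):
        simplex.sort(key=lambda v: (constraint_violation(v), objective(v)))
        centroid = np.mean(simplex[:-1], axis=0)
        trial = centroid + (centroid - simplex[-1])          # reflect the worst vertex
        h, f = constraint_violation(trial), objective(trial)
        if filter_accepts(filter_entries, h, f):
            filter_entries.append((h, f))
            simplex[-1] = trial                              # accept: replace worst vertex
        else:
            simplex = [simplex[0] + 0.5 * (v - simplex[0]) for v in simplex]  # shrink
    return simplex[0]

print("approximate solution:", simplex_filter([0.0, 0.0]))
```

No derivatives, penalty constants, or Lagrange multipliers appear: the simplex supplies the search directions and the filter decides acceptance, which is the combination the abstract describes.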
Abstract:
The filter method is a technique for solving nonlinear programming problems. The filter algorithm has two phases in each iteration: the first reduces a measure of infeasibility, while the second reduces the objective function value. In real optimization problems, the objective function is usually not differentiable or its derivatives are unknown. In these cases it becomes essential to use optimization methods where the calculation of the derivatives or the verification of their existence is not necessary: direct search methods or derivative-free methods are examples of such techniques. In this work we present a new direct search method, based on simplex methods, for general constrained optimization that combines the features of simplex and filter methods. This method neither computes nor approximates derivatives, penalty constants, or Lagrange multipliers.
Abstract:
In this work we solve Mathematical Programs with Complementarity Constraints using the hyperbolic smoothing strategy. Under this approach, the complementarity condition is relaxed through the use of the hyperbolic smoothing function, involving a positive parameter that can be decreased to zero. An iterative algorithm is implemented in the MATLAB language, and a set of AMPL problems from the MacMPEC database was tested.
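A small sketch of this smoothing strategy, written in Python rather than the MATLAB implementation mentioned above. The smoothed complementarity function used here, φ_τ(a, b) = a + b − √((a − b)² + 4τ²), is a standard choice that enforces ab = τ² and recovers the complementarity condition as τ → 0; it stands in for the authors' hyperbolic smoothing function, and the toy problem and solver settings are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def phi(a, b, tau):
    """Smoothed complementarity: phi = 0 enforces a*b = tau^2 for a, b >= 0."""
    return a + b - np.sqrt((a - b) ** 2 + 4.0 * tau ** 2)

# Toy MPCC: minimize (x-2)^2 + (y-1)^2  s.t.  x >= 0, y >= 0, x*y = 0.
objective = lambda z: (z[0] - 2.0) ** 2 + (z[1] - 1.0) ** 2

x = np.array([1.0, 1.0])
tau = 1.0
for _ in range(8):                       # decrease the smoothing parameter geometrically
    cons = [{"type": "eq", "fun": lambda z, t=tau: phi(z[0], z[1], t)}]
    bounds = [(0.0, None), (0.0, None)]
    res = minimize(objective, x, method="SLSQP", bounds=bounds, constraints=cons)
    x, tau = res.x, tau * 0.1
print("approximate MPCC solution:", np.round(x, 4))
```

Each outer iteration solves a smooth nonlinear program with the current τ and warm-starts the next one, so the iterates approach a point satisfying the original complementarity condition.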
Abstract:
In this paper, a solution to a highly constrained and non-convex economic dispatch (ED) problem with a meta-heuristic technique named Sensing Cloud Optimization (SCO) is presented. The proposed meta-heuristic is based on a cloud of particles whose central point represents the objective function value, while the remaining particles act as sensors "to fill" the search space and "guide" the central particle so that it moves in the best direction. To demonstrate its performance, a case study with multi-fuel units and valve-point effects is presented.
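A heavily simplified sketch of the sensing-cloud idea as described above (a central particle surrounded by sensor particles that probe the search space and pull it in the best direction), applied to a toy two-unit dispatch cost with valve-point terms and a power-balance penalty; the unit data, move rule, and parameters are illustrative assumptions, not the paper's SCO.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy units: (a, b, c, e, f, Pmin, Pmax) -- quadratic cost plus valve-point ripple.
units = [(100, 2.0, 0.002, 50, 0.06, 50, 300),
         (120, 1.8, 0.003, 40, 0.05, 50, 300)]
demand = 400.0

def dispatch_cost(P):
    cost = 0.0
    for (a, b, c, e, f, pmin, pmax), p in zip(units, P):
        p = np.clip(p, pmin, pmax)
        cost += a + b * p + c * p ** 2 + abs(e * np.sin(f * (pmin - p)))
    cost += 1e3 * abs(np.sum(np.clip(P, 50, 300)) - demand)   # power-balance penalty
    return cost

def sensing_cloud(n_sensors=20, radius=80.0, iterations=200):
    center = np.array([200.0, 200.0])                 # central particle
    for _ in range(iterations):
        sensors = center + rng.normal(0.0, radius, size=(n_sensors, len(center)))
        best = min(sensors, key=dispatch_cost)        # sensor probing the best region
        if dispatch_cost(best) < dispatch_cost(center):
            center = center + 0.5 * (best - center)   # move in the best direction
        radius *= 0.98                                # gradually focus the cloud
    return center

P = sensing_cloud()
print("dispatch:", np.round(P, 1), "cost:", round(dispatch_cost(P), 2))
```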
Abstract:
Master's degree in Electrical and Computer Engineering
Abstract:
Master's degree in Informatics Engineering
Abstract:
Master's degree in Electrical Engineering – Electrical Power Systems
Abstract:
We have developed a new method of single-drop microextraction (SDME) for the preconcentration of organochlorine pesticides (OCPs) from complex matrices. It is based on the use of a silicone ring at the tip of the syringe. A 5 μL drop of n-hexane is applied to an aqueous extract containing the OCPs and found to be adequate to preconcentrate the OCPs prior to analysis by GC in combination with tandem mass spectrometry. Fourteen OCPs were determined using this technique in combination with programmable temperature vaporization, which is shown to have many advantages over traditional split/splitless injection. The effects of the type of organic solvent, exposure time, agitation, and organic drop volume were optimized. Relative recoveries ranging from 59 to 117%, with repeatabilities (coefficient of variation) below 15%, were achieved. The limits of detection range from 0.002 to 0.150 μg kg−1. The method was applied to the preconcentration of OCPs in fresh strawberry, strawberry jam, and soil.
Abstract:
A QuEChERS method has been developed for the determination of 14 organochlorine pesticides in 14 soils from different Portuguese regions with a wide range of compositions. The extracts were analysed by GC-ECD (gas chromatography with electron-capture detection) and confirmed by GC-MS/MS (where MS/MS is tandem mass spectrometry). The organic matter content is a key factor in the process efficiency. An optimization was carried out according to the soils' organic carbon level, divided into two groups: HS (organic carbon > 2.3%) and LS (organic carbon < 2.3%). The method was validated through linearity, recovery, precision and accuracy studies. The quantification was carried out using a matrix-matched calibration to minimize matrix effects. Acceptable recoveries were obtained (70–120%) with a relative standard deviation of ≤16% for the three levels of contamination. In HS soils, the limits of detection ranged from 3.42 to 23.77 μg kg−1 and the limits of quantification from 11.41 to 79.23 μg kg−1. For LS soils, the limits of detection ranged from 6.11 to 14.78 μg kg−1 and the limits of quantification from 20.37 to 49.27 μg kg−1. Of the 14 collected soil samples, only one showed a dieldrin residue (45.36 μg kg−1) above the limit of quantification. This methodology combines the advantages of QuEChERS, GC-ECD detection and GC-MS/MS confirmation, producing a very rapid, sensitive and reliable procedure that can be applied in routine analytical laboratories.
Abstract:
This paper presents a study on the photo-electronic properties of multilayer a-Si:H/a-SiC:H p-i-n-i-p structures. The study aims to give insight into the internal electrical characteristics of such a structure in thermal equilibrium, under applied bias, and under different illumination conditions. Taking advantage of this insight, it is possible to establish a relation among the electrical behavior of the structure, the structure geometry (i.e., the thickness of the light-absorbing intrinsic layers and of the internal n-layer), and the composition of the layers (i.e., the optical bandgap controlled through the percentage of carbon dilution in the a-Si1-xCx:H layers). Showing an optical gain for low incident light power that is controllable by means of externally applied bias or structure composition, these structures are quite attractive for photo-sensing device applications, such as color sensors and large-area color image detectors. An analysis based on numerical ASCA simulations is presented to describe the behavior of different configurations of the device and compared with experimental measurements (spectral response and current-voltage characteristics). (c) 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.