Abstract:
The smart grid concept is rapidly evolving towards practical implementations able to bring its advantages into practice. Evolution of legacy equipment and infrastructures is not sufficient to accomplish the smart grid goals, as it does not consider the needs of the players operating in a complex environment that is dynamic and competitive in nature. Artificial intelligence based applications can provide solutions to these problems, supporting decentralized intelligence and decision-making. A case study illustrates the importance of Virtual Power Players (VPP) and multi-player negotiation in the context of smart grids. This case study is based on real data and aims at optimizing energy resource management, considering generation, storage and demand response.
Abstract:
Nowadays, there is a growing environmental concern about where the energy that we use comes from, bringing attention to renewable energies. However, the use and trade of renewable energies in the market can be complicated because of the lack of generation guarantees, mainly in wind farms. The lack of guarantees is usually addressed by using a generation reserve. The aggregation of DG plants gives rise to a new concept: the Virtual Power Producer (VPP). VPPs can reinforce the importance of wind generation technologies, making them valuable in electricity markets. This paper presents some results obtained with a simulation tool (ViProd) developed to support VPPs in the analysis of their operation and management methods and of the effects of their strategies.
Abstract:
Adequate decision support tools are required by electricity market players operating in a liberalized environment, allowing them to consider all the business opportunities and take strategic decisions. Ancillary services (AS) represent a good negotiation opportunity that must be considered by market players. Based on ancillary services forecasting, market participants can use strategic bidding for day-ahead ancillary services markets. For this reason, ancillary services market simulation is being included in MASCEM, a multi-agent based electricity market simulator that can be used by market players to test and enhance their bidding strategies. The paper presents the methodology used to undertake ancillary services forecasting, based on an Artificial Neural Network (ANN) approach. ANNs are used for day-ahead prediction of non-spinning reserve (NS), regulation-up (RU), and regulation-down (RD). Spinning reserve (SR) is mentioned as past work for comparative analysis. A case study based on California ISO (CAISO) data is included; the forecasted results are presented and compared with the CAISO published forecast.
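As a rough illustration of day-ahead reserve forecasting, the sketch below fits a linear least-squares model (a deliberately simplified stand-in for the paper's ANN) that maps one day's 24-hour reserve profile to the next day's; all data and dimensions are synthetic assumptions, not CAISO values.

```python
import numpy as np

def fit_day_ahead_model(history):
    """history: (n_days, 24) array of hourly reserve requirements in MW.
    Fits a linear map from day d's profile to day d+1's profile."""
    X = history[:-1]                              # predictor: day d
    y = history[1:]                               # target: day d + 1
    X1 = np.hstack([X, np.ones((len(X), 1))])     # add a bias column
    W, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return W

def forecast_next_day(W, last_day):
    """Predict tomorrow's 24-hour profile from today's."""
    return np.append(last_day, 1.0) @ W

# Synthetic history: a daily sinusoidal shape plus noise (illustrative).
rng = np.random.default_rng(0)
profile = 100 + 30 * np.sin(2 * np.pi * np.arange(24) / 24)
history = profile + rng.normal(0, 2, size=(30, 24))
W = fit_day_ahead_model(history)
pred = forecast_next_day(W, history[-1])          # 24 hourly forecasts, MW
```

In the paper's setting the linear model would be replaced by a trained ANN per service (NS, RU, RD), but the day-ahead input/output framing is the same.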
Abstract:
Electricity market players operating in a liberalized environment require access to adequate decision support tools, allowing them to consider all the business opportunities and take strategic decisions. Ancillary services represent a good negotiation opportunity that must be considered by market players. For this reason, decision support tools must include ancillary services market simulation. This paper proposes two different methods (Linear Programming and Genetic Algorithm approaches) for ancillary services dispatch. The methodologies are implemented in MASCEM, a multi-agent based electricity market simulator. A test case based on California Independent System Operator (CAISO) data, concerning the dispatch of Regulation Down, Regulation Up, Spinning Reserve and Non-Spinning Reserve services, is included in this paper.
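The Linear Programming approach can be illustrated with a minimal single-service dispatch: choose quantities from competing bids so that the reserve requirement is met at minimum cost. The bid prices, capacities and requirement below are invented for illustration and do not come from the CAISO test case.

```python
import numpy as np
from scipy.optimize import linprog

prices = np.array([5.0, 7.0, 6.0, 9.0])   # $/MW bid prices (hypothetical)
caps   = np.array([30.0, 40.0, 25.0, 50.0])  # MW offered per bid
requirement = 80.0                         # MW needed for one service

# Minimize total cost subject to sum(q) >= requirement and 0 <= q <= cap.
# linprog uses A_ub @ x <= b_ub, so the >= constraint is negated.
res = linprog(c=prices,
              A_ub=-np.ones((1, 4)), b_ub=[-requirement],
              bounds=list(zip([0] * 4, caps)))
dispatch = res.x    # cheapest bids fill first: 30 MW @ $5, 25 @ $6, 25 @ $7
```

A Genetic Algorithm would explore the same feasible region stochastically; for a pure linear cost like this one, the LP solution is optimal by construction.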
Abstract:
Distributed generation, unlike centralized electrical generation, aims to generate electrical energy on a small scale as near as possible to load centers, interchanging electric power with the network. This work presents a probabilistic methodology conceived to assist electric system planning engineers in the selection of distributed generation locations, taking into account the hourly load changes or the daily load cycle. The hourly load centers, for each of the different hourly load scenarios, are calculated deterministically. These location points, properly weighted according to their load magnitude, are used to calculate the best-fit probability distribution. This distribution is used to determine the maximum likelihood perimeter of the area where each distributed generation source should preferably be located by the planning engineers. This takes into account, for example, the availability and the cost of land lots, which are factors of special relevance in urban areas, as well as several obstacles important for the final selection of the candidate distributed generation points. The proposed methodology has been applied to a real case, assuming three different bivariate probability distributions: the Gaussian distribution, a bivariate version of Freund’s exponential distribution and the Weibull probability distribution. The methodology algorithm has been programmed in MATLAB. Results are presented and discussed for the application of the methodology to a realistic case and demonstrate its ability to efficiently determine the best locations of the distributed generation sources and their corresponding distribution networks.
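The Gaussian variant of the method can be sketched as follows: fit a load-weighted bivariate normal distribution to the hourly load centers and test whether a candidate site falls inside the maximum-likelihood region. The coordinates and load weights are invented; the paper's MATLAB implementation is not reproduced here.

```python
import numpy as np

# Hourly load centers (km) and their load magnitudes (illustrative values).
centers = np.array([[2.0, 1.0], [2.5, 1.4], [1.8, 1.1], [2.1, 1.3]])
loads = np.array([120.0, 200.0, 90.0, 150.0])

w = loads / loads.sum()
mu = w @ centers                        # weighted mean load center
d = centers - mu
cov = (d * w[:, None]).T @ d            # weighted covariance matrix

CHI2_95_DF2 = 5.991                     # 95% quantile of chi-square, 2 dof

def inside_ml_region(point):
    """True if `point` lies inside the 95% probability ellipse of the
    fitted bivariate Gaussian (the 'maximum likelihood perimeter')."""
    m2 = (point - mu) @ np.linalg.solve(cov, point - mu)  # Mahalanobis^2
    return bool(m2 <= CHI2_95_DF2)
```

The planner can then screen candidate land lots by this membership test before weighing cost and obstacle factors.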
Abstract:
Group decision making plays an important role in today’s organisations. The impact of decision making is so high and complex that the decision making process is rarely carried out individually. In Group Decision Argumentation, there is a set of participants, with different profiles and expertise levels, who exchange ideas or engage in a process of argumentation and counter-argumentation, negotiate, cooperate, collaborate or even discuss techniques and/or methodologies for problem solving. In this paper, a Multi-Agent simulator for representing the behaviour of group members in a decision making process is proposed. Agents behave depending on rational and emotional intelligence and use persuasive argumentation to convince others and make alternative choices.
Abstract:
This paper proposes the integration of personality, emotion and mood aspects for a group of participants in a decision-making negotiation process. The aim is to simulate the participants' behavior in that scenario. The personality is modeled through the OCEAN five-factor model of personality (Openness, Conscientiousness, Extraversion, Agreeableness and Negative emotionality). The emotion model applied to the participants is the OCC model (Ortony, Clore and Collins), which defines several criteria representing the human emotional structure. In order to integrate personality and emotion, the pleasure-arousal-dominance (PAD) model of mood is used.
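A minimal sketch of wiring personality into mood, assuming a simple linear OCEAN-to-PAD mapping; the weights below are illustrative placeholders, not the mapping used in the paper.

```python
def ocean_to_pad(o, c, e, a, n):
    """Map OCEAN traits in [0, 1] (n = negative emotionality) to a PAD
    mood point. The linear weights are hypothetical, for illustration."""
    pleasure = 0.6 * a + 0.3 * e - 0.4 * n
    arousal = 0.5 * o + 0.4 * n
    dominance = 0.5 * e + 0.3 * c - 0.3 * a
    return pleasure, arousal, dominance

# An agreeable, extraverted, emotionally stable participant lands in a
# pleasant, moderately aroused, moderately dominant mood region.
mood = ocean_to_pad(o=0.7, c=0.5, e=0.8, a=0.6, n=0.2)
```

In a full simulator, OCC-appraised emotions would then shift this baseline PAD point during the negotiation.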
Abstract:
Group decision making plays an important role in today’s organisations. The impact of decision making is so high and complex that the decision making process is rarely carried out by just one individual. The simulation of group decision making through a Multi-Agent System is a very interesting research topic. The purpose of this paper is to specify the actors involved in the simulation of a group decision, to present a model for the process of group formation and to describe the approach taken to implement that model. The group formation model considers the existence of incomplete and negative information, which was identified as crucial to make the simulation closer to reality.
Abstract:
This paper is a contribution to the assessment and comparison of magnet properties based on magnetic field characteristics, particularly concerning the magnetic induction uniformity in the air gaps. To this end, a solver was developed and implemented to determine the magnetic field of a magnetic core to be used in Fast Field Cycling (FFC) Nuclear Magnetic Resonance (NMR) relaxometry. The electromagnetic field computation is based on a 2D finite-element method (FEM) using both the scalar and the vector potential formulations. Results for the magnetic field lines and the magnetic induction vector in the air gap are presented. The target magnetic induction is 0.2 T, a typical requirement of the FFC NMR technique, which can be achieved with a magnetic core based on permanent magnets or coils. In addition, this application requires high magnetic induction uniformity. To achieve this goal, a solution including superconducting pieces is analyzed. Results are compared with those of a different FEM program.
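Before running a full FEM model, the 0.2 T target can be sanity-checked with a lumped magnetic-circuit estimate, B_g ≈ Br / (1 + μr·g/Lm), for a permanent magnet of length Lm in series with an air gap g (equal cross-sections, no leakage). The material and dimensions below are invented to show how the target constrains the geometry; they are not the paper's design.

```python
# Hypothetical magnet and gap dimensions (not the paper's FEM geometry).
Br = 0.35      # remanence of a ferrite magnet, T
mu_r = 1.05    # recoil relative permeability of the magnet
Lm = 0.014     # magnet length, m
g = 0.010      # air-gap length, m

# Lumped magnetic circuit: Hm*Lm + Hg*g = 0 with Bm = Br + mu0*mu_r*Hm
# and flux continuity Bm = Bg gives Bg = Br / (1 + mu_r * g / Lm).
Bg = Br / (1 + mu_r * g / Lm)       # estimated air-gap induction, T
meets_target = abs(Bg - 0.2) < 0.02
```

This zero-dimensional estimate says nothing about uniformity, which is exactly what the paper's 2D FEM analysis (and the superconducting pieces) addresses.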
Abstract:
This paper presents a distributed model predictive control (DMPC) approach for indoor thermal comfort that simultaneously optimizes the consumption of a limited shared energy resource. The control objective of each subsystem is to minimize the heating/cooling energy cost while maintaining the indoor temperature and the power used inside bounds. In a distributed coordinated environment, the control uses multiple dynamically decoupled agents (one for each subsystem/house) aiming to satisfy the coupling constraints. According to its hourly power demand profile, each house assigns a priority level that indicates how much it is willing to bid in an auction to consume the limited clean resource. This procedure allows the bidding value to vary hourly and, consequently, the order in which the agents access the clean energy also varies. In addition to the power constraints, all houses also have thermal comfort constraints that must be fulfilled. The system is simulated with several houses in a distributed environment.
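The hourly priority auction described above can be sketched as a simple bid-ordered allocation of the limited clean resource; all bids, demands and the capacity below are made-up values, and the real controller would re-run this every hour inside the MPC loop.

```python
def allocate_clean_power(bids, demands, clean_capacity):
    """Grant the limited clean resource in descending bid order.
    bids/demands: dicts mapping house -> priority bid / requested MW."""
    order = sorted(bids, key=bids.get, reverse=True)  # highest bid first
    grant, remaining = {}, clean_capacity
    for house in order:
        grant[house] = min(demands[house], remaining)
        remaining -= grant[house]
    return grant

# One illustrative hour: h1 bids highest, so it is served first;
# whatever clean capacity is left cascades to h3, then h2.
alloc = allocate_clean_power(
    bids={'h1': 0.9, 'h2': 0.4, 'h3': 0.7},
    demands={'h1': 3.0, 'h2': 2.0, 'h3': 2.5},
    clean_capacity=4.0)
```

Houses that receive no clean power would fall back to the costlier source, which is what their MPC cost functions then try to minimize.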
Abstract:
Master's degree in Electrical and Computer Engineering
Abstract:
In distributed video coding, motion estimation is typically performed at the decoder to generate the side information, increasing the decoder complexity while providing low complexity encoding in comparison with predictive video coding. Motion estimation can be performed once to create the side information or several times to refine the side information quality along the decoding process. In this paper, motion estimation is performed at the decoder side to generate multiple side information hypotheses which are adaptively and dynamically combined whenever additional decoded information is available. The proposed iterative side information creation algorithm is inspired by video denoising filters and requires some statistics of the virtual channel between each side information hypothesis and the original data. With the proposed denoising algorithm for side information creation, an RD performance gain of up to 1.2 dB is obtained for the same bitrate.
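One common denoising-style way to combine several side information hypotheses is inverse-variance weighting over the estimated virtual-channel noise; the sketch below uses that scheme as an assumption, since the paper's exact combination rule is not reproduced here.

```python
import numpy as np

def combine_side_information(hypotheses, noise_vars):
    """hypotheses: (k, n) array of candidate side-information samples;
    noise_vars: (k,) estimated virtual-channel noise variances.
    Returns the pixel-wise inverse-variance weighted combination."""
    w = 1.0 / np.asarray(noise_vars)     # cleaner hypotheses weigh more
    w = w / w.sum()                      # normalize the weights
    return w @ np.asarray(hypotheses)

# Two toy hypotheses for the same 2-sample block: the first is estimated
# to be 3x less noisy, so it dominates the combination (values invented).
h = np.array([[10.0, 12.0], [14.0, 16.0]])
si = combine_side_information(h, noise_vars=[1.0, 3.0])
```

In the iterative scheme, the variances would be re-estimated from newly decoded bitplanes and the combination repeated, refining the side information as decoding progresses.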
Abstract:
This paper presents a study of the photo-electronic properties of multilayer a-Si:H/a-SiC:H p-i-n-i-p structures. The study aims to give an insight into the internal electrical characteristics of such a structure in thermal equilibrium, under applied bias and under different illumination conditions. Taking advantage of this insight, it is possible to establish a relation among the electrical behavior of the structure, the structure geometry (i.e. thickness of the light-absorbing intrinsic layers and of the internal n-layer) and the composition of the layers (i.e. optical bandgap controlled through the percentage of carbon dilution in the a-Si1-xCx:H layers). Showing an optical gain for low incident light power, controllable by means of externally applied bias or structure composition, these structures are quite attractive for photo-sensing device applications, like color sensors and large area color image detectors. An analysis based on numerical ASCA simulations is presented to describe the behavior of different configurations of the device and compared with experimental measurements (spectral response and current-voltage characteristics). (c) 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Abstract:
Low-density parity-check (LDPC) codes are nowadays one of the hottest topics in coding theory, notably due to their advantages in terms of bit error rate performance and low complexity. In order to exploit the potential of the Wyner-Ziv coding paradigm, practical distributed video coding (DVC) schemes should use powerful error correcting codes with near-capacity performance. In this paper, new ways to design LDPC codes for the DVC paradigm are proposed and studied. The new LDPC solutions rely on merging parity-check nodes, which corresponds to reducing the number of rows in the parity-check matrix. This allows the compression ratio of the source (DCT coefficient bitplane) to be changed gracefully according to the correlation between the original and the side information. The proposed LDPC codes reach good performance for a wide range of source correlations and achieve a better RD performance when compared to the popular turbo codes.
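The parity-check node merging idea can be sketched directly on a toy matrix: two rows of H are replaced by their modulo-2 sum, reducing the number of syndrome bits and hence increasing the compression ratio (the matrix below is a small illustrative example, not a code from the paper).

```python
import numpy as np

def merge_check_nodes(H, i, j):
    """Merge check nodes i and j of parity-check matrix H: row i becomes
    the XOR of the two rows, and row j is removed."""
    H = H.copy()
    H[i] = (H[i] + H[j]) % 2
    return np.delete(H, j, axis=0)

# Toy parity-check matrix with 3 checks over 4 bits.
H = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1]])
H2 = merge_check_nodes(H, 0, 1)   # 2 checks left: shorter syndrome
```

When the side information correlation is high, more merges (fewer syndrome bits) still allow successful decoding; when it is low, the decoder falls back to the unmerged, lower-compression matrix.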
Abstract:
Objective - To describe and validate the simulation of the basic features of the GE Millennium MG gamma camera using the GATE Monte Carlo platform. Material and methods - Crystal size and thickness, parallel-hole collimation and a realistic energy acquisition window were simulated in the GATE platform. GATE results were compared to experimental data in the following imaging conditions: a point source of 99mTc at different positions during static imaging and tomographic acquisitions using two different energy windows. The agreement between the events expected and those detected by simulation was assessed with the Mann–Whitney–Wilcoxon test. Comparisons were made regarding the measurement of sensitivity and spatial resolution, static and tomographic. Simulated and experimental spatial resolutions for tomographic data were compared with the Kruskal–Wallis test to assess simulation accuracy for this parameter. Results - There was good agreement between simulated and experimental data. The number of decays expected, when compared with the number of decays registered, showed a small deviation (≤0.007%). The sensitivity comparisons between static acquisitions for different distances from source to collimator (1, 5, 10, 20, 30 cm) with energy windows of 126–154 keV and 130–158 keV showed differences of 4.4%, 5.5%, 4.2%, 5.5%, 4.5% and 5.4%, 6.3%, 6.3%, 5.8%, 5.3%, respectively. For the tomographic acquisitions, the mean differences were 7.5% and 9.8% for the energy windows 126–154 keV and 130–158 keV. Comparison of simulated and experimental spatial resolutions for tomographic data showed no statistically significant differences at the 95% confidence level. Conclusions - Adequate simulation of the system's basic features using the GATE Monte Carlo simulation platform was achieved and validated.
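The Mann–Whitney comparison can be sketched with a hand-computed U statistic on illustrative sensitivity values (not the paper's measurements): if U exceeds the 5% critical value for two samples of five, the simulated and experimental data show no significant difference.

```python
def mann_whitney_u(x, y):
    """Smaller of the two Mann-Whitney U statistics: count the pairs
    (x_i, y_j) with x_i > y_j, scoring ties as 0.5."""
    u = sum(1.0 if xi > yj else 0.5 if xi == yj else 0.0
            for xi in x for yj in y)
    return min(u, len(x) * len(y) - u)

# Hypothetical simulated vs experimental sensitivities (cps/MBq).
sim = [160.2, 158.9, 161.5, 159.8, 160.7]
exp = [158.8, 160.1, 159.5, 161.0, 160.3]

U = mann_whitney_u(sim, exp)
# For n1 = n2 = 5 and two-sided alpha = 0.05, the critical U is 2:
# reject equality only when U <= 2, so U above it means good agreement.
no_significant_difference = U > 2
```

The same comparison on real data would typically be done with a library routine (e.g. a statistics package's Mann–Whitney test) that also reports a p-value.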