966 results for Distributed processing
Abstract:
Sustainable development concerns have led renewable energy sources to be increasingly used for distributed electricity generation. However, this is mainly due to incentives or mandatory targets set by energy policies, as in the European Union. Assuring a sustainable future requires distributed generation to be able to participate in competitive electricity markets. To gain more negotiation power in the market and to benefit from economies of scale, distributed generators can be aggregated, giving rise to a new concept: the Virtual Power Producer (VPP). VPPs are multi-technology and multi-site heterogeneous entities that should adopt organization and management methodologies so that they can make distributed generation a truly profitable activity, able to participate in the market. This paper presents ViProd, a simulation tool that allows simulating VPP operation in the context of MASCEM, a multi-agent based electricity market simulator.
Abstract:
Power Systems (PS) have been affected by substantial penetration of Distributed Generation (DG) and by operation in competitive environments. Future PS will have to deal with large-scale integration of DG and other distributed energy resources (DER), such as storage, and provide market agents with the means to ensure flexible and secure operation. Virtual power players (VPP) can aggregate a diversity of players, namely generators and consumers, and a diversity of energy resources, including electricity generation based on several technologies, storage and demand response. This paper proposes an artificial neural network (ANN) based methodology to support VPP resource scheduling. The trained network is able to achieve good scheduling results while requiring modest computational means. A real-data test case is presented.
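The abstract does not describe the network itself; as a minimal illustrative sketch (class name, dimensions, and the ReLU/linear layer choice are assumptions, with weights presumed to come from offline training), the inference step of such a scheduler could look like this in Java:

// Minimal sketch of a trained feed-forward network used for VPP resource
// scheduling inference. All names, dimensions and weights are illustrative.
public final class VppScheduleAnn {

    private final double[][] w1; // input -> hidden weights
    private final double[] b1;   // hidden biases
    private final double[][] w2; // hidden -> output weights
    private final double[] b2;   // output biases

    public VppScheduleAnn(double[][] w1, double[] b1, double[][] w2, double[] b2) {
        this.w1 = w1; this.b1 = b1; this.w2 = w2; this.b2 = b2;
    }

    /** Maps hourly forecasts (load, wind, price, ...) to per-resource set-points. */
    public double[] schedule(double[] forecasts) {
        double[] hidden = layer(forecasts, w1, b1, true);
        return layer(hidden, w2, b2, false); // linear output layer
    }

    private static double[] layer(double[] in, double[][] w, double[] b, boolean relu) {
        double[] out = new double[b.length];
        for (int j = 0; j < b.length; j++) {
            double s = b[j];
            for (int i = 0; i < in.length; i++) s += w[j][i] * in[i];
            out[j] = relu ? Math.max(0.0, s) : s;
        }
        return out;
    }

    public static void main(String[] args) {
        // Tiny placeholder network: 3 forecast inputs -> 2 hidden -> 1 set-point.
        VppScheduleAnn ann = new VppScheduleAnn(
            new double[][]{{0.2, 0.5, -0.1}, {0.4, -0.3, 0.2}},
            new double[]{0.1, 0.0},
            new double[][]{{0.7, 0.3}},
            new double[]{0.05});
        double[] setPoints = ann.schedule(new double[]{0.8, 0.4, 0.6}); // normalized load, wind, price
        System.out.println("Set-point: " + setPoints[0]);
    }
}

The appeal of this approach, as the abstract notes, is that once trained, inference reduces to a few matrix-vector products, which explains the modest computational requirements.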
Abstract:
The smart grid concept is rapidly evolving towards practical implementations able to bring its advantages into practice. Evolution in legacy equipment and infrastructures is not sufficient to accomplish the smart grid goals, as it does not consider the needs of the players operating in a complex environment that is dynamic and competitive in nature. Artificial intelligence based applications can provide solutions to these problems, supporting decentralized intelligence and decision-making. A case study illustrates the importance of Virtual Power Players (VPP) and multi-player negotiation in the context of smart grids. This case study is based on real data and aims at optimizing energy resource management, considering generation, storage and demand response.
Abstract:
In the energy management of a small power system, the scheduling of generation units is a crucial problem for which adequate methodologies can maximize the performance of the energy supply. This paper proposes an innovative methodology for distributed energy resources management. The optimal operation of distributed generation, demand response and storage resources is formulated as a mixed-integer linear programming (MILP) model and solved by a CPLEX-based deterministic optimization technique implemented in the General Algebraic Modeling System (GAMS). The paper presents a vision for the grids of the future, focusing on conceptual and operational aspects of electrical grids characterized by intensive penetration of DG, in the scope of competitive environments, and using artificial intelligence methodologies to attain the envisaged goals. These concepts are implemented in a computational framework that includes both grid and market simulation.
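The abstract does not reproduce the model; a generic sketch of such a cost-minimizing MILP, under assumed notation (P_{i,t} the dispatched power of resource i in period t, u_{i,t} its binary commitment, c_i its unit cost, P^{DR}_t scheduled demand response, D_t the demand), would be:

\min_{P,\,u} \; \sum_{t=1}^{T} \sum_{i} c_i \, P_{i,t}
\quad \text{s.t.} \quad
\sum_{i} P_{i,t} + P^{\mathrm{DR}}_{t} = D_t, \qquad
u_{i,t}\, P_i^{\min} \le P_{i,t} \le u_{i,t}\, P_i^{\max}, \qquad
u_{i,t} \in \{0,1\}.

The binary commitment variables u_{i,t} are what make the model mixed-integer and motivate the use of a solver such as CPLEX.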
Abstract:
Demand response can play a very relevant role in future power systems, in which distributed generation can help to assure service continuity in some fault situations. This paper deals with the demand response concept and discusses its use in the context of competitive electricity markets and intensive use of distributed generation. The paper presents DemSi, a demand response simulator that allows studying demand response actions and schemes using a realistic network simulation based on PSCAD. Demand response opportunities are used in an optimized way, considering flexible contracts between consumers and suppliers. A case study demonstrates the advantages of using flexible contracts and optimizing the available generation when there is a lack of supply.
Abstract:
Distributed generation, unlike centralized electrical generation, aims to generate electrical energy on a small scale as close as possible to load centers, interchanging electric power with the network. This work presents a probabilistic methodology conceived to assist electric system planning engineers in the selection of distributed generation locations, taking into account the hourly load changes or the daily load cycle. The hourly load centers, for each of the different hourly load scenarios, are calculated deterministically. These location points, properly weighted according to their load magnitude, are used to fit the best-fitting probability distribution. This distribution is used to determine the maximum-likelihood perimeter of the area where each distributed generation source point should preferably be located by the planning engineers. This takes into account, for example, the availability and the cost of land lots, which are factors of special relevance in urban areas, as well as several obstacles important for the final selection of candidate distributed generation points. The proposed methodology has been applied to a real case, assuming three different bivariate probability distributions: the Gaussian distribution, a bivariate version of Freund's exponential distribution and the Weibull probability distribution. The methodology's algorithm has been programmed in MATLAB. Results for the application of the methodology to a realistic case are presented and discussed, and demonstrate its ability to efficiently determine the best location of distributed generation and the corresponding distribution networks.
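The load-center computation is not spelled out in the abstract; under the usual definition, with hourly loads P_i at coordinates (x_i, y_i), each hourly load center would be the load-weighted centroid

x_c = \frac{\sum_i P_i\, x_i}{\sum_i P_i}, \qquad
y_c = \frac{\sum_i P_i\, y_i}{\sum_i P_i},

and, for the bivariate Gaussian case, the maximum-likelihood perimeter at confidence level \alpha is the ellipse

(\mathbf{z} - \hat{\boldsymbol{\mu}})^{\top} \hat{\Sigma}^{-1} (\mathbf{z} - \hat{\boldsymbol{\mu}}) = \chi^2_{2,\alpha},

with \hat{\boldsymbol{\mu}} and \hat{\Sigma} estimated from the weighted hourly centers. How the perimeters for the Freund and Weibull variants are obtained is not stated here.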
Abstract:
Over time, the XML markup language has acquired considerable importance in application development, standards definition and the representation of large volumes of data, such as databases. Today, processing XML documents in a short period of time is a critical activity in a large range of applications, which requires choosing the most appropriate mechanism to parse XML documents quickly and efficiently. When using a programming language for XML processing, such as Java, it becomes necessary to use effective mechanisms, e.g. APIs, which allow large documents to be read and processed in an appropriate manner. This paper presents a performance study of the main existing Java APIs that deal with XML documents, in order to identify the most suitable one for processing large XML files.
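The study's ranking is not stated in this abstract; purely as an illustration of the streaming style usually preferred for very large files, here is a minimal reader using the standard StAX API (javax.xml.stream), which pulls events sequentially instead of materializing a DOM tree ("records.xml" is a placeholder file name):

import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;
import java.io.FileInputStream;

// Minimal sketch: stream a large XML file with StAX, counting elements
// rather than building an in-memory tree as DOM-based APIs do.
public final class StaxCount {
    public static void main(String[] args) throws Exception {
        XMLInputFactory factory = XMLInputFactory.newInstance();
        try (FileInputStream in = new FileInputStream("records.xml")) {
            XMLStreamReader reader = factory.createXMLStreamReader(in);
            long elements = 0;
            while (reader.hasNext()) {
                if (reader.next() == XMLStreamConstants.START_ELEMENT) {
                    elements++; // reader.getLocalName() would give the tag name here
                }
            }
            reader.close();
            System.out.println("Elements: " + elements);
        }
    }
}

Because memory use stays constant regardless of document size, this style is the usual baseline against which tree-based APIs are compared in studies like this one.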
Abstract:
Master's in Electrical and Computer Engineering
Abstract:
This paper presents a distributed model predictive control (DMPC) approach for indoor thermal comfort that simultaneously optimizes the consumption of a limited shared energy resource. The control objective of each subsystem is to minimize the heating/cooling energy cost while keeping the indoor temperature and the power used within bounds. In a distributed, coordinated environment, the control uses multiple dynamically decoupled agents (one for each subsystem/house) aiming to satisfy the coupling constraints. According to its hourly power demand profile, each house assigns a priority level that indicates how much it is willing to bid in the auction to consume the limited clean resource. This procedure allows the bid value to vary hourly and, consequently, the order in which agents access the clean energy also varies. In addition to the power constraints, all houses also have thermal comfort constraints that must be fulfilled. The system is simulated with several houses in a distributed environment.
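No formulation is given in the abstract; under assumed notation (T_i the indoor temperature of house i, u_i its heating/cooling power, c_k the hourly energy cost, \bar{U} the shared clean-resource limit), each agent's local problem over a horizon N could be sketched as

\min_{u_i} \; \sum_{k=0}^{N-1} c_k\, u_i(k)
\quad \text{s.t.} \quad
T_i^{\min} \le T_i(k) \le T_i^{\max}, \qquad
0 \le u_i(k) \le u_i^{\max}, \qquad
\sum_{i} u_i(k) \le \bar{U},

where the last (coupling) constraint is the one negotiated across houses through the auction-based priority ordering.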
Abstract:
Amorphous SiC tandem heterostructures are used to filter a specific band in the visible range. Experimental and simulated results are compared to validate the use of SiC multilayered structures in applications where gain compensation is needed or where unwanted wavelengths must be attenuated. Spectral response data acquired under different frequencies, optical wavelength control and side irradiations are analyzed. Transfer function characteristics are discussed. Color pulsed communication channels are transmitted together and the output signal is analyzed under different background conditions. Results show that under controlled wavelength backgrounds, the device sensitivity is enhanced in a precise wavelength range and quenched in the others, tuning or suppressing a specific band. Depending on the background wavelength and irradiation side, the device acts either as a long-pass, a short-pass, or a band-rejection filter. An optoelectronic model supports the experimental results and gives insight into the physics of the device.
Abstract:
Video coding technologies have played a major role in the explosion of large-market digital video applications and services. In this context, the very popular MPEG-x and H.26x video coding standards adopted a predictive coding paradigm, where complex encoders exploit the data redundancy and irrelevancy to 'control' much simpler decoders. This codec paradigm fits well applications and services such as digital television and video storage, where decoder complexity is critical, but does not match well the requirements of emerging applications such as visual sensor networks, where encoder complexity is more critical. The Slepian-Wolf and Wyner-Ziv theorems brought the possibility to develop the so-called Wyner-Ziv video codecs, following a different coding paradigm where it is the task of the decoder, and no longer of the encoder, to (fully or partly) exploit the video redundancy. Theoretically, Wyner-Ziv video coding does not incur any compression performance penalty with regard to the more traditional predictive coding paradigm (at least under certain conditions). In the context of Wyner-Ziv video codecs, the so-called side information, which is a decoder estimate of the original frame to code, plays a critical role in the overall compression performance. For this reason, much research effort has been invested in the past decade to develop increasingly efficient side information creation methods. The main objective of this paper is to review and evaluate the available side information methods after proposing a classification taxonomy to guide this review, allowing more solid conclusions to be reached and the next relevant research challenges to be better identified. After classifying the side information creation methods into four classes, notably guess, try, hint and learn, the review of the most important techniques in each class and the evaluation of some of them leads to the important conclusion that the rate-distortion (RD) performance of the side information creation methods depends on the amount of temporal correlation in each video sequence. It also became clear that the best available Wyner-Ziv video coding solutions are almost systematically based on the learn approach. The best solutions are already able to systematically outperform H.264/AVC Intra, and also the H.264/AVC zero-motion standard solutions for specific types of content.
Abstract:
Red, green and blue optical signals were directed to an a-SiC:H multilayered device, each with a specific transmission rate. The combined optical signal was analyzed by reading out the generated photocurrent under different applied voltages. Results show that when a chromatic, time-dependent wavelength combination with different transmission rates irradiates the multilayered structure, the device operates as a tunable wavelength filter and can be used in wavelength division multiplexing systems for short-range communications. An application to fluorescent protein detection is presented.
Abstract:
Processes are a central entity in enterprise collaboration. Collaborative processes need to be executed and coordinated on a distributed computational platform where computers are connected through heterogeneous networks and systems. Life cycle management of such collaborative processes requires a framework able to handle their diversity, based on different computational and communication requirements. This paper proposes a rationale for such a framework, points out key requirements and proposes a strategy for a supporting technological infrastructure. Beyond the portability of collaborative process definitions among different technological bindings, a framework to handle the different life cycle phases of those definitions is presented and discussed.
Abstract:
Recent advances in the embedded systems world have led to more complex systems with application-specific blocks (IP cores): the System on Chip (SoC) devices. A good example of these complex devices can be found in cell phones, which can include image processing cores, communication cores, memory card cores, and others. The need to augment a system's processing performance at the lowest power leads to the concept of the Multiprocessor System on Chip (MSoC), in which the execution of multiple tasks can be distributed across various processors. This thesis addresses the creation of a synthesizable multiprocessing system to be placed in an FPGA device, providing good flexibility to tailor the system to a specific application. To deliver the multiprocessing system, the synthesizable, 32-bit, SPARC V8-compliant LEON3 processor will be used.