969 results for distributed generation (DG)
Abstract:
The future scenarios for the operation of smart grids are likely to include a large diversity of players of different types and sizes. With control and decision making being decentralized over the network, intelligence should also be decentralized so that every player is able to act in the market environment. In this new context, aggregator players, enabling medium, small, and even micro size players to act in a competitive environment, will be very relevant. Virtual Power Players (VPP) and single players must optimize their energy resource management in order to accomplish their goals. This is relatively easy for larger players, which have the financial means to access adequate decision support tools for scheduling their resources optimally. Smaller players, however, have difficulty accessing such tools, so alternative methods must be made available to support their decisions. This paper presents a methodology, based on Artificial Neural Networks (ANN), intended to support smaller players’ resource scheduling. The methodology uses a training set built from the energy resource scheduling solutions obtained with a reference optimization methodology, in this case mixed-integer non-linear programming (MINLP). The trained network is able to achieve good scheduling results while requiring only modest computational means.
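As an illustration of the general approach described above, the minimal sketch below trains a feed-forward ANN to imitate schedules produced by a reference optimizer. The data shapes, feature choices, and library are assumptions for illustration only, not the paper's implementation; the MINLP reference is stood in for by random placeholder targets.

```python
# Hedged sketch: learn a mapping from player context to a resource schedule,
# using (context, schedule) pairs that a reference optimizer would provide.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Each input row: player context (forecast load, prices, DG availability, ...).
# Each target row: the schedule the reference MINLP optimizer produced for it.
# Both are random placeholders here; sizes are assumed.
X = rng.random((500, 12))   # 500 scenarios, 12 context features
y = rng.random((500, 24))   # 24-period schedule per scenario

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

ann = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
ann.fit(X_train, y_train)

print("held-out R^2:", ann.score(X_test, y_test))
```

Once trained, predicting a schedule for a new scenario is a single forward pass, which is what makes the approach attractive for players with modest computational means.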
Abstract:
A multilevel negotiation mechanism for operating smart grids and negotiating in electricity markets is presented, taking advantage of virtual power player management.
Abstract:
During the past 15 years, the emergence and dissemination of resistance to third-generation cephalosporins in nosocomial Enterobacteriaceae has become a serious problem worldwide, due to the production of extended-spectrum β-lactamases (ESBLs). The aim of this study was to investigate the presence of ESBL-producing enterobacteria among Portuguese clinical isolates from regions near Spain, to determine the antimicrobial susceptibility patterns, and to compare the two countries. The β-lactamase genes blaTEM, blaSHV and blaCTX-M were detected by molecular methods. Among the ESBL-producing isolates, an extraordinarily high level (98.9%) of resistance to the fourth-generation cephalosporin cefepime was found. These findings point to the need to re-evaluate the definition of ESBL.
Abstract:
This paper addresses the establishment of a Virtual Producer/Consumer Agent (VPCA) in order to optimize the integrated management of distributed energy resources and to improve and control Demand Side Management (DSM) and its aggregated loads. The paper presents the VPCA architecture and the proposed function-based organization used to coordinate the several generation technologies, the different load types, and the storage systems. This VPCA organization uses a framework based on data mining techniques to characterize the customers. The paper includes the results of several experimental test cases, using real data and taking into account electricity generation resources as well as consumption data.
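The abstract does not specify which data mining techniques are used to characterize the customers; the sketch below shows one common possibility, clustering daily consumption profiles with k-means. All names, shapes, and the cluster count are assumptions for illustration, not the paper's method.

```python
# Hedged sketch: characterize customers by clustering hourly consumption profiles.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
profiles = rng.random((200, 24))   # 200 customers, 24 hourly consumption values (placeholder data)

scaled = StandardScaler().fit_transform(profiles)
labels = KMeans(n_clusters=4, n_init=10, random_state=1).fit_predict(scaled)

for k in range(4):
    print(f"cluster {k}: {np.sum(labels == k)} customers")
```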
Abstract:
In recent years there has been a considerable increase in the number of people in need of intensive care, especially among the elderly, a phenomenon related to population ageing (Brown 2003). However, this is not exclusive to the elderly, as diseases such as obesity, diabetes, and high blood pressure have been increasing among young adults (Ford and Capewell 2007). As a new fact, it has to be dealt with by the healthcare sector, and particularly by the public one. Thus, finding new and cost-effective ways of healthcare delivery is of particular importance, especially when patients are not to be detached from their environments (WHO 2004). Following this line of thinking, a VirtualECare Multiagent System is presented in section 2, with our efforts centered on its Group Decision modules (Costa, Neves et al. 2007) (Camarinha-Matos and Afsarmanesh 2001). On the other hand, there has been a growing interest in combining the technological advances of the information society - computing, telecommunications and knowledge - in order to create new methodologies for problem solving, namely those that converge on Group Decision Support Systems (GDSS) based on agent perception. Indeed, the new economy, along with increased competition in today’s complex business environments, leads companies to seek complementarities in order to increase competitiveness and reduce risks. Under these scenarios, planning takes a major role in a company’s life cycle. However, effective planning depends on the generation and analysis of ideas (innovative or not) and, as a result, the idea generation and management processes are crucial. Our objective is to apply the GDSS referred to above to a new area. We believe that the use of GDSS in the healthcare arena will allow professionals to achieve better results in the analysis of a patient’s Electronic Clinical Profile (ECP). This is vital given the arrival on the market of new drugs and medical practices, which compete for the use of limited resources.
Abstract:
In a world increasingly conscious of environmental effects, power and energy systems are undergoing huge transformations. Electric energy produced in power plants is transmitted and distributed to end users through a power grid. The power industry performs the engineering design, installation, operation, and maintenance tasks to provide a high-quality, secure energy supply while accounting for its systems’ ability to withstand uncertain events, such as weather-related outages. Competitive, deregulated electricity markets and new renewable energy sources, however, have further complicated this already complex infrastructure. Sustainable development has also been a challenge for power systems. Recently, there has been a significant increase in the installation of distributed generation, mainly based on renewable resources such as wind and solar. Integrating these new generation systems leads to more complexity. Indeed, the number of generation sources greatly increases as the grid embraces numerous smaller and distributed resources. In addition, the inherent uncertainties of wind and solar energy lead to technical challenges such as forecasting, scheduling, operation, control, and risk management. In this special issue introductory article, we analyze the key areas in this field that can benefit most from AI and intelligent systems now and in the future. We also identify new opportunities for cross-fertilization between power systems and energy markets researchers and intelligent systems researchers.
Abstract:
Master's degree in Nuclear Medicine - Area of specialization: Positron Emission Tomography.
Abstract:
This paper presents a distributed model predictive control (DMPC) approach for indoor thermal comfort that simultaneously optimizes the consumption of a limited shared energy resource. The control objective of each subsystem is to minimize the heating/cooling energy cost while keeping the indoor temperature and the consumed power within bounds. In a distributed coordinated environment, the control uses multiple dynamically decoupled agents (one for each subsystem/house) aiming to satisfy the coupling constraints. According to its hourly power demand profile, each house assigns a priority level that indicates how much it is willing to bid in an auction to consume the limited clean resource. This procedure allows the bidding value to vary hourly and, consequently, the order in which the agents access the clean energy also varies. In addition to the power constraints, all houses also have thermal comfort constraints that must be fulfilled. The system is simulated with several houses in a distributed environment.
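The DMPC controller itself is not reproduced here; the sketch below only illustrates the bid-ordered access to the limited clean resource that the abstract describes, where houses with higher hourly bids are served first. Function names, the allocation rule details, and the numbers are assumptions for illustration.

```python
# Hedged sketch: hourly allocation of a limited clean-energy budget by descending bid.

def allocate_clean_energy(bids, demands, clean_budget_kw):
    """bids/demands: dicts keyed by house id; returns kW of clean energy granted per house."""
    allocation = {house: 0.0 for house in bids}
    remaining = clean_budget_kw
    # Houses access the limited resource in order of their hourly bid (highest first).
    for house in sorted(bids, key=bids.get, reverse=True):
        granted = min(demands[house], remaining)
        allocation[house] = granted
        remaining -= granted
        if remaining <= 0:
            break
    return allocation

# Example hour: three houses, 2.5 kW of clean energy available.
print(allocate_clean_energy({"h1": 0.9, "h2": 0.4, "h3": 0.7},
                            {"h1": 2.0, "h2": 1.5, "h3": 1.0},
                            clean_budget_kw=2.5))
```

Because the bids can change every hour, the serving order changes as well, which is the coordination effect the abstract highlights.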
Abstract:
Video coding technologies have played a major role in the explosion of large-market digital video applications and services. In this context, the very popular MPEG-x and H.26x video coding standards adopted a predictive coding paradigm, where complex encoders exploit the data redundancy and irrelevancy to 'control' much simpler decoders. This codec paradigm fits well applications and services such as digital television and video storage, where the decoder complexity is critical, but does not match well the requirements of emerging applications such as visual sensor networks, where the encoder complexity is more critical. The Slepian-Wolf and Wyner-Ziv theorems brought the possibility to develop the so-called Wyner-Ziv video codecs, following a different coding paradigm where it is the task of the decoder, and no longer of the encoder, to (fully or partly) exploit the video redundancy. Theoretically, Wyner-Ziv video coding does not incur any compression performance penalty relative to the more traditional predictive coding paradigm (at least under certain conditions). In the context of Wyner-Ziv video codecs, the so-called side information, which is a decoder estimate of the original frame to code, plays a critical role in the overall compression performance. For this reason, much research effort has been invested in the past decade to develop increasingly more efficient side information creation methods. The main objective of this paper is to review and evaluate the available side information methods after proposing a classification taxonomy to guide this review, allowing more solid conclusions to be reached and the next relevant research challenges to be better identified. After classifying the side information creation methods into four classes, notably guess, try, hint and learn, the review of the most important techniques in each class and the evaluation of some of them leads to the important conclusion that the side information creation methods provide better rate-distortion (RD) performance depending on the amount of temporal correlation in each video sequence. It also became clear that the best available Wyner-Ziv video coding solutions are almost systematically based on the learn approach. The best solutions are already able to systematically outperform H.264/AVC Intra, and also the H.264/AVC zero-motion standard solutions for specific types of content. (C) 2013 Elsevier B.V. All rights reserved.
Abstract:
Master's degree in Electrical Engineering – Electrical Power Systems
Abstract:
In distributed video coding, motion estimation is typically performed at the decoder to generate the side information, increasing the decoder complexity while providing low-complexity encoding in comparison with predictive video coding. Motion estimation can be performed once to create the side information or several times to refine the side information quality along the decoding process. In this paper, motion estimation is performed at the decoder side to generate multiple side information hypotheses, which are adaptively and dynamically combined whenever additional decoded information is available. The proposed iterative side information creation algorithm is inspired by video denoising filters and requires some statistics of the virtual channel between each side information hypothesis and the original data. With the proposed denoising algorithm for side information creation, an RD performance gain of up to 1.2 dB is obtained for the same bitrate.
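The sketch below conveys only the general idea behind combining several side information hypotheses: each hypothesis is weighted inversely to its estimated virtual-channel noise variance, as a denoising filter would do. The actual iterative algorithm in the paper is more elaborate; all function names, shapes, and noise values here are assumptions.

```python
# Hedged sketch: fuse side-information hypotheses with noise-variance-based weights.
import numpy as np

def fuse_hypotheses(hypotheses, noise_variances):
    """hypotheses: list of HxW arrays; noise_variances: per-hypothesis virtual-channel estimates."""
    weights = 1.0 / (np.asarray(noise_variances) + 1e-9)   # less noisy -> larger weight
    weights /= weights.sum()
    fused = np.zeros_like(hypotheses[0], dtype=float)
    for w, h in zip(weights, hypotheses):
        fused += w * h
    return fused

# Toy usage: two noisy estimates of a 4x4 "frame" with different noise levels.
rng = np.random.default_rng(2)
frame = rng.random((4, 4))
hyps = [frame + rng.normal(0, s, frame.shape) for s in (0.05, 0.20)]
print(np.abs(fuse_hypotheses(hyps, [0.05**2, 0.20**2]) - frame).mean())
```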
Abstract:
Low-density parity-check (LDPC) codes are nowadays one of the hottest topics in coding theory, notably due to their advantages in terms of bit error rate performance and low complexity. In order to exploit the potential of the Wyner-Ziv coding paradigm, practical distributed video coding (DVC) schemes should use powerful error-correcting codes with near-capacity performance. In this paper, new ways to design LDPC codes for the DVC paradigm are proposed and studied. The new LDPC solutions rely on merging parity-check nodes, which corresponds to reducing the number of rows in the parity-check matrix. This allows the compression ratio of the source (DCT coefficient bitplane) to be changed gracefully according to the correlation between the original and the side information. The proposed LDPC codes achieve good performance for a wide range of source correlations and a better RD performance than the popular turbo codes.
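As the abstract states, merging two parity-check nodes amounts to replacing two rows of the parity-check matrix H by their sum over GF(2), so the syndrome gets shorter and the compression ratio changes. The sketch below shows that elementary operation only; the merge schedule and code design used in the paper are not reproduced, and the tiny matrix is a placeholder.

```python
# Hedged sketch: merge two parity-check nodes by XOR-ing (GF(2) sum) the rows of H.
import numpy as np

def merge_check_nodes(H, row_a, row_b):
    """Return a parity-check matrix with rows row_a and row_b replaced by their GF(2) sum."""
    merged = H[row_a] ^ H[row_b]                              # XOR of the two rows
    keep = [i for i in range(H.shape[0]) if i not in (row_a, row_b)]
    return np.vstack([H[keep], merged])                       # one fewer row than H

# Toy 3x6 parity-check matrix (placeholder, not from the paper).
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1]], dtype=np.uint8)
print(merge_check_nodes(H, 0, 1))
```

Each merge reduces the number of syndrome bits by one, which is why repeating it lets the code trade protection strength for compression as the source/side-information correlation varies.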
Abstract:
Processes are a central entity in enterprise collaboration. Collaborative processes need to be executed and coordinated on a distributed computational platform where computers are connected through heterogeneous networks and systems. Life cycle management of such collaborative processes requires a framework able to handle their diversity, based on different computational and communication requirements. This paper proposes a rationale for such a framework, points out key requirements, and proposes a strategy for a supporting technological infrastructure. Beyond the portability of collaborative process definitions among different technological bindings, a framework to handle the different life cycle phases of those definitions is presented and discussed. (c) 2007 Elsevier Ltd. All rights reserved.