893 results for Distributed Virtual Environments
Abstract:
The demand response concept has been gaining importance, and the success of several recent implementations makes the benefits of this resource unquestionable. This happens in a power systems operation environment that also makes intensive use of distributed generation. However, more adequate approaches and models are needed to address the aggregation of small-size consumers and producers while taking these resources' goals into account. The present paper focuses on the management of demand response programs and distributed generation resources by a Virtual Power Player that aims to minimize its operation costs while taking consumption shifting constraints into account. The impact of consumption shifting on the distributed generation resources schedule is also considered. The methodology is applied to three scenarios based on 218 consumers and 4 types of distributed generation, over a time frame of 96 periods.
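The cost-minimizing shifting described above can be illustrated with a toy sketch (this is not the paper's optimization model; the greedy rule, function names, and all numbers are invented for illustration): load is moved from the most expensive periods to the cheapest ones, subject to a per-period capacity limit.

```python
# Toy illustration (not the paper's model): a VPP shifts consumption
# away from expensive periods toward cheaper ones, subject to a cap
# on how much load each period can hold. All values are invented.

def shift_consumption(load, price, capacity):
    """Greedily move load from the most expensive periods into the
    cheapest periods, respecting each period's capacity limit."""
    load = list(load)
    order = sorted(range(len(load)), key=lambda t: price[t])
    cheap, dear = 0, len(order) - 1
    while cheap < dear:
        src, dst = order[dear], order[cheap]
        movable = min(load[src], capacity[dst] - load[dst])
        load[src] -= movable
        load[dst] += movable
        if capacity[dst] - load[dst] == 0:
            cheap += 1   # destination period is full
        else:
            dear -= 1    # source period has been emptied
    return load

def cost(load, price):
    """Operation cost of a load profile under per-period prices."""
    return sum(l * p for l, p in zip(load, price))
```

For example, with prices rising over four periods and ample capacity, the shifted profile concentrates consumption in the cheap periods while conserving total demand.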
Abstract:
Energy resource scheduling is becoming increasingly important as more distributed generators and electric vehicles are connected to the distribution network. This paper proposes a methodology to be used by Virtual Power Players (VPPs) for energy resource scheduling in smart grids, considering day-ahead, hour-ahead, and real-time horizons. The method assumes that energy resources are managed by a VPP, which establishes contracts with their owners. A full AC power flow calculation included in the model takes network constraints into account. Distribution function errors are used to simulate variations between time horizons and to measure the performance of the proposed methodology. A 33-bus distribution network with a large number of distributed resources is used.
Abstract:
The implementation of competitive electricity markets has changed the position of consumers and distributed generation in power systems operation. The use of distributed generation and participation in demand response programs, namely in smart grids, bring several advantages for consumers, aggregators, and system operators. The present paper proposes a remuneration structure for aggregated distributed generation and demand response resources. A virtual power player aggregates all the resources. The resources are grouped into a certain number of clusters, each corresponding to a distinct tariff group, according to the economic impact of the resulting remuneration tariff. The determined tariffs are intended to be used for several months; the aggregator can define the periodicity of the tariff definition. The case study in this paper includes 218 consumers and 66 distributed generation units.
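The grouping of resources into tariff clusters can be sketched with a simple one-dimensional k-means over each resource's remuneration level (a hedged illustration only: the paper does not specify its clustering algorithm, and the function name, seeding rule, and data below are invented).

```python
# Hypothetical sketch: group resources into k tariff clusters by a
# single economic indicator (e.g. expected remuneration), using a
# plain 1-D k-means. All names and values are illustrative.

def kmeans_1d(values, k, iters=50):
    values = sorted(values)
    # seed centroids evenly across the sorted values
    centroids = [values[i * (len(values) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iters):
        # assign each value to its nearest centroid
        clusters = [[] for _ in range(k)]
        for v in values:
            i = min(range(k), key=lambda c: abs(v - centroids[c]))
            clusters[i].append(v)
        # recompute centroids as cluster means
        new = [sum(c) / len(c) if c else centroids[i]
               for i, c in enumerate(clusters)]
        if new == centroids:
            break
        centroids = new
    return centroids, clusters
```

Each resulting cluster would then be assigned one remuneration tariff, so resources with similar economic impact share a tariff group.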
Abstract:
The Smart Grid environment allows the integration of resources of small and medium players through the use of Demand Response programs. Despite the clear advantages for the grid, the integration of consumers must be done carefully. This paper proposes a system that simulates small and medium players. The system is essential for producing tests and studies on the active participation of small and medium players in the Smart Grid environment. Compared with similar systems, its advantages comprise the capability to deal with three types of loads (virtual, contextual, and real), support for several load optimization modules, and the ability to run in real time. The use of modules and the dynamic configuration of the player result in a system that can represent different players in an easy and independent way. This paper describes the system and all its capabilities.
Abstract:
Further improvements in the implementation of demand response programs are needed in order to take full advantage of this resource, namely for participation in energy and reserve market products, which requires adequate aggregation and remuneration of small-size resources. The present paper focuses on SPIDER, a demand response simulator that has been improved in order to simulate demand response, including realistic power system simulation. To illustrate the simulator's capabilities, the present paper proposes a methodology focusing on the aggregation of consumers and generators, providing adequate tools for the adoption of demand response programs by the involved players. The proposed methodology focuses on a Virtual Power Player that manages and aggregates the available demand response and distributed generation resources in order to satisfy the required electrical energy demand and reserve. The aggregation of resources is addressed by the use of clustering algorithms, and operation costs for the VPP are minimized. The presented case study is based on a set of 32 consumers and 66 distributed generation units, running on 180 distinct operation scenarios.
Abstract:
The vision of the Internet of Things (IoT) includes large and dense deployments of interconnected smart sensing and monitoring devices. Such vast deployments necessitate the collection and processing of large volumes of measurement data. However, collecting all the measured data from individual devices at such a scale may be impractical and time-consuming. Moreover, processing these measurements requires complex algorithms to extract useful information. Thus, it becomes imperative to devise distributed information processing mechanisms that identify application-specific features in a timely manner and with low overhead. In this article, we present a feature extraction mechanism for dense networks that takes advantage of dominance-based medium access control (MAC) protocols to (i) efficiently obtain the global extrema of the sensed quantities, (ii) extract local extrema, and (iii) detect the boundaries of events, using simple transforms that nodes apply to their local data. We extend our results to a large dense network with multiple broadcast domains (MBD). We discuss and compare two approaches for addressing the challenges of MBD, and we show through extensive evaluations that our proposed distributed MBD approach is fast and efficient at retrieving the most valuable measurements, independent of the number of sensor nodes in the network.
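The global-extremum step of dominance-based MACs can be simulated in a few lines (a sketch under stated assumptions, not the article's protocol: it assumes CAN-style arbitration where a '1' bit is dominant and nodes transmit their readings MSB-first; the function name and values are invented).

```python
# Simulated sketch of dominance-based arbitration (CAN-style): every
# node broadcasts its reading MSB-first; a '1' bit dominates the
# medium, and any node that sent a recessive '0' while the bus
# carried a dominant '1' withdraws. After `width` bit slots the bus
# holds the network-wide maximum, regardless of the node count.

def dominance_max(readings, width=8):
    active = list(readings)
    result = 0
    for bit in reversed(range(width)):
        # the bus carries a dominant '1' if any active node asserts it
        bus = any((v >> bit) & 1 for v in active)
        result = (result << 1) | bus
        if bus:
            # nodes whose bit was recessive back off
            active = [v for v in active if (v >> bit) & 1]
    return result
```

The cost is a fixed number of bit slots (the value width), which is what makes the extremum retrieval independent of the number of contending nodes.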
Abstract:
Dissertation submitted for the degree of Master in Electrical and Computer Engineering.
Abstract:
The liberalization of electricity markets and the growing integration of distributed energy resources into distribution networks, namely distributed generation units, load control through demand response programs, storage systems, and electric vehicles, have brought about an evolution in the operation and management paradigm of electric power systems. This new operation paradigm requires the development of new management and control methodologies that allow the integration of all the new technologies in an efficient and sustainable way. The main contribution of this work is the development of methodologies for energy resource management in the context of smart grids, covering three distinct time horizons (24 hours, 1 hour, and 5 minutes). The methodologies take previous schedules as well as updated forecasts into account in order to improve the overall performance of the system and, consequently, increase the profitability of the aggregating agents. The proposed methodologies were integrated into a simulation tool intended to support the decisions of an aggregating entity known as a virtual power player. Three distinct management algorithms are proposed, namely for the second (1 hour) and third (5 minutes) phases of the management tool, differentiated by the influence that the preceding and following periods have on the period being scheduled. Another relevant aspect presented in this document is the testing and validation of the proposed models on a commercial simulation platform. Beyond the proposed methodologies, this application made it possible to validate the models of the equipment considered, namely at the level of the distribution networks and of the distributed energy resources. This dissertation presents three case studies, each with different scenarios referring to future operation conditions.
These case studies are important to verify the feasibility of implementing the proposed methodologies and algorithms. In addition, the proposed methodologies are compared in terms of the results obtained, the management complexity in the simulation environment for the different phases of the proposed tool, and the benefits and drawbacks of using the proposed tool.
Abstract:
Human activity is very dynamic and subtle, and most physical environments are also highly dynamic and support a vast range of social practices that do not map directly into any immediate ubiquitous computing functionality. Identifying what is valuable to people is very hard and obviously leads to great uncertainty regarding the type of support needed and the type of resources needed to create such support. We have addressed the issues of system development through the adoption of a crowdsourced software development model [13]. We have designed and developed Anywhere places, an open and flexible system support infrastructure for ubiquitous computing that is based on a balanced combination of global services and applications and situated devices. Evaluation, however, is still an open problem. The characteristics of ubiquitous computing environments make their evaluation very complex: there are no globally accepted metrics, and it is very difficult to evaluate large-scale and long-term environments in real contexts. In this paper, we describe a first proposal for a hybrid 3D simulated prototype of Anywhere places that combines simulated and real components to generate a mixed reality which can be used to assess the envisaged ubiquitous computing environments [17].
Abstract:
The MAP-i Doctoral Program of the Universities of Minho, Aveiro and Porto.
Abstract:
Despite its wide range and abundance in certain habitats, the crab-eating raccoon Procyon cancrivorus (G. Cuvier, 1798) is considered one of the least known Neotropical carnivore species. In the present study we analyzed the diet of P. cancrivorus in a peat forest and on an estuarine island in southernmost Brazil. Fruits of the gerivá palm tree Syagrus romanzoffiana were the most consumed item in the peat forest, followed by insects and mollusks. Small mammals, followed by Bromelia antiacantha (Bromeliaceae) fruits and brachyuran crustaceans, were the most frequent items on the estuarine island. Other items found in lower frequencies were Solanum sp., Psidium sp., Smilax sp. and Dyospiros sp. fruits, diplopods, scorpions, fishes, anuran amphibians, reptiles (black tegu lizard and snakes), birds, and medium-sized mammals (white-eared opossum, armadillo, and coypu). Levin's index values (peat forest: 0.38; estuarine island: 0.45) indicate a position roughly midway between a specialist and an evenly distributed diet. Pianka's index (0.80) showed considerable diet similarity between the two systems. Procyon cancrivorus presented a varied diet in the studied areas and may play an important role as a seed disperser in coastal environments in southernmost Brazil.
Abstract:
In the literature on risk, one generally assumes that uncertainty is uniformly distributed over the entire working horizon when the absolute risk-aversion index is negative and constant. From this perspective, risk is totally exogenous, and thus independent of endogenous risks. The classic procedure is "myopic" with regard to potential changes in the future behavior of the agent due to the inherent random fluctuations of the system: the agent's attitude to risk is rigid. Although often criticized, the most widely used hypothesis for the analysis of economic behavior is risk-neutrality. This borderline case must be envisaged with prudence in a dynamic stochastic context. The traditional measures of risk-aversion are generally too weak for making comparisons between risky situations, given the dynamic complexity of the environment. This can be highlighted in concrete problems in finance and insurance, a context in which the Arrow-Pratt measures (in the small) give ambiguous results.
Abstract:
As technology evolves, computing capacity keeps increasing, and problems that were intractable in the past cease to be so with today's resources. Most applications that tackle these problems are complex, since achieving high performance rates requires using as many resources as possible, which gives them an inherently distributed architecture. Following the trend of the research community, this work proposes an architecture for grid environments based on resource virtualization that enables the efficient management of those resources. The experiments carried out confirmed the viability of this architecture and the management improvements that the use of virtual machines provides.
Abstract:
Grid is a hardware and software infrastructure that provides dependable, consistent, pervasive, and inexpensive access to high-end computational resources. Grid enables access to resources but does not guarantee any quality of service. Moreover, Grid does not provide performance isolation: one user's job can influence the performance of another user's job. A further problem is that Grid users belong to the scientific community and their jobs require specific, customized software environments. Providing the perfect environment to the user is very difficult in Grid because of its dispersed and heterogeneous nature. Cloud computing, by contrast, provides full customization and control, but there is no simple procedure for submitting user jobs as there is in Grid. Grid computing can provide customized resources and performance to the user through virtualization. A virtual machine can join the Grid as an execution node, or it can be submitted as a job with the user's jobs inside. The first method gives quality of service and performance isolation; the second additionally provides customization and administration. In this thesis, a solution is proposed to enable virtual machine reuse, providing performance isolation together with customization and administration. The same virtual machine can be used for several jobs. In the proposed solution, customized virtual machines join the Grid pool on user request. The solution describes two scenarios to achieve this goal. In the first scenario, users submit their customized virtual machine as a job; the virtual machine joins the Grid pool when it is powered on. In the second scenario, user-customized virtual machines are preconfigured in the execution system and join the Grid pool on user request. Condor and VMware Server are used to deploy and test the scenarios. Condor supports virtual machine jobs, so scenario 1 is deployed using the Condor VM universe.
The second scenario uses the VMware VIX API to script the powering on and off of the remote virtual machines. The experimental results show that, since scenario 2 does not need to transfer the virtual machine image, the image becomes live on the pool much faster. In scenario 1, the virtual machine runs as a Condor job, so it is easy to administer. The only pitfall of scenario 1 is the network traffic.
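For scenario 1, a Condor VM universe submission looks roughly like the sketch below (a hedged example only: the thesis does not show its submit file, and the memory value and paths here are invented placeholders, not the author's configuration).

```
# Hypothetical Condor submit description for scenario 1: a VMware
# image submitted as a "vm universe" job. Paths and sizes are examples.
universe                     = vm
vm_type                      = vmware
vm_memory                    = 512
vmware_dir                   = /home/user/my-vm
vmware_should_transfer_files = true
log                          = vm_job.log
queue
```

Once the job starts and the image boots, the virtual machine can register itself with the pool as an execution node, matching the behavior described for scenario 1.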
Abstract:
In this final degree project (TFC), a student with Java knowledge but no prior experience with distributed applications designs and implements a typical example of a distributed application: an e-commerce application built with J2EE technology and following design patterns.