976 results for Distributed resources
Abstract:
A supervisory control and data acquisition (SCADA) system is an integrated platform that incorporates several components and has been applied in power systems and many other engineering fields to monitor, operate and control a wide range of processes. In future electrical networks, SCADA systems are essential for the intelligent management of resources such as distributed generation and demand response, implemented in the smart grid context. This paper presents a SCADA system for a typical residential house. The application is implemented on MOVICON™11 software. The main objective is to manage the residential consumption, reducing or curtailing loads to keep the power consumption at or below a specified setpoint imposed by the customer and the generation availability.
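The load-management rule the abstract describes can be sketched as a simple priority-based curtailment loop. This is a minimal illustrative sketch, not the paper's MOVICON™11 implementation; the load names, powers, and priorities are invented.

```python
# Hypothetical sketch of priority-based load curtailment: shed the
# lowest-priority loads until total consumption is at or below the
# setpoint imposed by the customer and the generation availability.

def curtail(loads, setpoint):
    """loads: list of (name, power_kW, priority); lower priority = shed first.
    Returns the set of load names kept on."""
    kept = {name for name, _, _ in loads}
    total = sum(p for _, p, _ in loads)
    # Shed loads in ascending priority order until under the setpoint.
    for name, power, _ in sorted(loads, key=lambda l: l[2]):
        if total <= setpoint:
            break
        kept.discard(name)
        total -= power
    return kept

loads = [("heater", 2.0, 1), ("dishwasher", 1.5, 2), ("fridge", 0.2, 9)]
print(curtail(loads, 2.0))  # heater (lowest priority) is shed first
```

A real SCADA controller would re-evaluate this rule on every metering cycle as the setpoint or generation availability changes.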
Abstract:
Energy resources management can play a very relevant role in future power systems in a SmartGrid context, with intensive penetration of distributed generation and storage systems. This paper deals with the importance of resource management in incident situations. It presents DemSi, an energy resources management simulator developed by the authors to simulate electrical distribution networks with high distributed generation penetration, storage at network points, and customers with demand response contracts. DemSi is used to undertake simulations for an incident scenario, showing the advantages of adequately using flexible contracts, storage, and reserve in order to limit the consequences of incidents.
Abstract:
Demand response can play a very relevant role in future power systems, in which distributed generation can help assure service continuity in some fault situations. This paper deals with the demand response concept and discusses its use in the context of competitive electricity markets and intensive use of distributed generation. It presents DemSi, a demand response simulator that allows demand response actions and schemes to be studied using a realistic network simulation based on PSCAD. Demand response opportunities are used in an optimized way, considering flexible contracts between consumers and suppliers. A case study shows the advantages of using flexible contracts and optimizing the available generation when there is a lack of supply.
Abstract:
Distributed generation, unlike centralized electrical generation, aims to generate electrical energy on a small scale as near as possible to load centers, interchanging electric power with the network. This work presents a probabilistic methodology conceived to assist electric system planning engineers in selecting the distributed generation location, taking into account the hourly load changes, or daily load cycle. The hourly load centers, for each of the different hourly load scenarios, are calculated deterministically. These location points, properly weighted according to their load magnitude, are used to calculate the best-fit probability distribution. This distribution is used to determine the maximum-likelihood perimeter of the area where each distributed generation source should preferably be located by the planning engineers. This takes into account, for example, the availability and the cost of land lots, which are factors of special relevance in urban areas, as well as other obstacles important for the final selection of candidate distributed generation points. The proposed methodology has been applied to a real case, assuming three different bivariate probability distributions: the Gaussian distribution, a bivariate version of Freund's exponential distribution, and the Weibull probability distribution. The methodology's algorithm has been programmed in MATLAB. Results are presented and discussed for the application of the methodology to a realistic case and demonstrate the ability of the proposed methodology to efficiently determine the best location of the distributed generation and the corresponding distribution networks.
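The first two steps of the methodology above (deterministic hourly load centers, then a weighted distribution fit) can be sketched as follows. This is an illustrative toy version, assuming a bivariate Gaussian only; the paper's MATLAB implementation also fits Freund-exponential and Weibull alternatives, and all coordinates and loads below are invented.

```python
# Sketch: compute the load-weighted center for each hourly scenario,
# then fit a bivariate Gaussian (weighted mean and covariance) to the
# resulting centers, weighted by their total load magnitude.

def load_center(points):
    """points: list of (x, y, load_kW) -> (cx, cy, total_load)."""
    w = sum(p[2] for p in points)
    return (sum(x * l for x, y, l in points) / w,
            sum(y * l for x, y, l in points) / w, w)

def fit_gaussian(centers):
    """Weighted mean and covariance of hourly centers (x, y, weight)."""
    w = sum(c[2] for c in centers)
    mx = sum(x * wi for x, y, wi in centers) / w
    my = sum(y * wi for x, y, wi in centers) / w
    sxx = sum(wi * (x - mx) ** 2 for x, y, wi in centers) / w
    syy = sum(wi * (y - my) ** 2 for x, y, wi in centers) / w
    sxy = sum(wi * (x - mx) * (y - my) for x, y, wi in centers) / w
    return (mx, my), [[sxx, sxy], [sxy, syy]]

hour_a = [(0.0, 0.0, 10.0), (4.0, 0.0, 30.0)]   # heavier load to the east
hour_b = [(0.0, 2.0, 20.0), (2.0, 2.0, 20.0)]
mean, cov = fit_gaussian([load_center(hour_a), load_center(hour_b)])
```

The fitted mean and covariance then define the maximum-likelihood ellipse (perimeter) inside which candidate sites are sought.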
Abstract:
The objective of this descriptive study was to map mental health research in Brazil, providing an overview of the infrastructure, financing and policies of mental health research. As part of the Atlas-Research Project, a WHO initiative to map mental health research in selected low- and middle-income countries, this study was carried out between 1998 and 2002. Data collection strategies included the evaluation of governmental documents and websites, and questionnaires sent to key professionals to provide information about the Brazilian mental health research infrastructure. In 2002, the total budget for health research was US$101 million, of which US$3.4 million (3.4%) was available for mental health research. The main funding sources for mental health research were found to be the São Paulo State Funding Agency (Fapesp, 53.2%) and the Ministry of Education (CAPES, 30.2%). The rate of doctors is 1.7 per 1,000 inhabitants, and the rate of psychiatrists is 2.7 per 100,000 inhabitants, estimated from the 2000 census. In 2002, there were 53 postgraduate courses directed to mental health training in Brazil (43 in psychology, six in psychiatry, three in psychobiology and one in psychiatric nursing), with 1,775 students being trained in Brazil and 67 overseas. There were nine programs covering psychiatry, neuropsychiatry, psychobiology and mental health, seven of them located in southern states. During the five-year period, 186 students obtained a doctoral degree (37 per year) and 637 articles were published in Institute for Scientific Information (ISI)-indexed journals. The investment channeled towards postgraduate and human resource education programs, by means of grants and other forms of research support, has secured the country a modest but continuous presence in international knowledge production in the mental health area.
Abstract:
Cloud computing is increasingly being adopted in different scenarios, such as social networking, business applications, and scientific experiments. Relying on virtualization technology, the construction of these computing environments targets improvements in the infrastructure, such as power efficiency and fulfillment of users' SLA specifications. The methodology usually applied is to pack all the virtual machines onto the proper physical servers. However, failure occurrences in these networked computing systems can have a substantial negative impact on system performance, deviating the system from our initial objectives. In this work, we propose adapted algorithms to dynamically map virtual machines to physical hosts, in order to improve cloud infrastructure power efficiency with low impact on users' required performance. Our decision-making algorithms leverage proactive fault-tolerance techniques to deal with system failures, allied with virtual machine technology to share node resources in an accurate and controlled manner. The results indicate that our algorithms perform better in targeting power efficiency and SLA fulfillment in the face of cloud infrastructure failures.
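The packing step mentioned above can be sketched as a first-fit-decreasing consolidation pass: fill as few hosts as possible so idle ones can be powered down. This is a minimal sketch under that assumption; the paper's actual algorithms also weigh SLA impact and proactive fault tolerance, which this toy version omits.

```python
# Sketch of power-aware VM placement: sort VMs by descending demand and
# place each on the first host with room (first-fit decreasing), opening
# a new host only when necessary, so fewer hosts stay powered on.

def place(vms, host_capacity):
    """vms: {name: cpu_demand}. Returns a list of hosts, each a dict of VMs."""
    hosts = []
    for name, demand in sorted(vms.items(), key=lambda kv: -kv[1]):
        for host in hosts:                      # first host with room
            if sum(host.values()) + demand <= host_capacity:
                host[name] = demand
                break
        else:                                   # otherwise power on a new host
            hosts.append({name: demand})
    return hosts

vms = {"web": 30, "db": 60, "cache": 20, "batch": 50}
hosts = place(vms, 100)
print(len(hosts), "hosts powered on")
```

A dynamic variant would rerun this mapping when a host is predicted to fail, migrating its VMs proactively rather than reactively.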
Abstract:
Master's degree in Electrical and Computer Engineering
Abstract:
Private institutions of social solidarity (IPSS) are non-profit entities set up on private initiative with the purpose of giving organized expression to the moral duty of solidarity and justice between individuals. Given the economic difficulties Portugal is going through, these institutions play a fundamental role in today's society, a role recognized by both the state and their clients. Human capital is the central element among intangible assets and is formed by the people who make up the institution. It is essential to analyze human resource management in IPSS, given that these people, aligned with the management board, are crucial for the institution to achieve the objectives it sets itself. This study aims to analyze the human resource management practices applied by IPSS; to do so, we used a diagnostic questionnaire distributed to a sample of the population, and we analyzed the practices of one IPSS through a case study. The study showed that IPSS mostly apply administrative human resource management and that the regulation of these institutions by the Social Security authority is an important factor in the type of management applied. The conclusions are based on the analysis of the case study and of the questionnaire responses from the sampled IPSS, so any generalization of the conclusions should be weighed with caution.
Abstract:
A great number of low-temperature geothermal fields occur in Northern Portugal, related to fractured rocks. The most important surface manifestations of these hydrothermal systems appear in pull-apart tectonic basins and are strongly conditioned by the orientation of the main fault systems in the region. This work presents the interpretation of gravity gradient maps and a 3D inversion model produced from a regional gravity survey. The horizontal gradients reveal a complex fault system. The obtained 3D model of density contrast highlights the main fault zone in the region and the depth distribution of the granitic bodies. Their relationship with the hydrothermal systems supports the conceptual models elaborated from hydrochemical and isotopic water analyses. This work emphasizes the role of the gravity method and its analysis in better understanding the connection between hydrothermal systems, the fractured rock pattern, and the surrounding geology. (c) 2013 Elsevier B.V. All rights reserved.
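The horizontal-gradient step mentioned above can be illustrated numerically: the magnitude of the horizontal gradient of the gravity anomaly peaks over lateral density contrasts such as fault zones. The sketch below uses central differences on an invented toy grid; it is not the survey's actual processing chain.

```python
# Sketch: horizontal gradient magnitude of a gridded gravity anomaly,
# computed with central differences at interior grid points.

def horizontal_gradient(grid, dx):
    """grid: 2-D list of gravity values (e.g. mGal); dx: grid spacing (m).
    Returns |grad| at interior points (borders left at 0.0)."""
    rows, cols = len(grid), len(grid[0])
    out = [[0.0] * cols for _ in range(rows)]
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            gx = (grid[i][j + 1] - grid[i][j - 1]) / (2 * dx)
            gy = (grid[i + 1][j] - grid[i - 1][j]) / (2 * dx)
            out[i][j] = (gx ** 2 + gy ** 2) ** 0.5
    return out

# A north-south ramp in the anomaly yields a constant gradient magnitude.
ramp = [[0.0, 0.0, 0.0], [1.0, 1.0, 1.0], [2.0, 2.0, 2.0]]
grad = horizontal_gradient(ramp, 1.0)
```

Ridges in this gradient map are what trace the edges of the faulted blocks in the interpretation.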
Abstract:
Master's degree in Electrical Engineering – Electrical Power Systems
Abstract:
Master's dissertation, Communication Sciences (Journalism), 22 January 2013, Universidade dos Açores.
Abstract:
In distributed video coding, motion estimation is typically performed at the decoder to generate the side information, increasing the decoder complexity while providing low-complexity encoding in comparison with predictive video coding. Motion estimation can be performed once to create the side information or several times to refine the side information quality along the decoding process. In this paper, motion estimation is performed at the decoder side to generate multiple side information hypotheses, which are adaptively and dynamically combined whenever additional decoded information is available. The proposed iterative side information creation algorithm is inspired by video denoising filters and requires some statistics of the virtual channel between each side information hypothesis and the original data. With the proposed denoising algorithm for side information creation, an RD performance gain of up to 1.2 dB is obtained for the same bitrate.
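A denoising-style combination of several side-information hypotheses can be sketched as an inverse-variance weighted average, so more reliable hypotheses dominate. This is a hedged sketch, not the paper's algorithm: it assumes the virtual-channel noise variances are already known, whereas the paper estimates those statistics during decoding.

```python
# Sketch: combine side-information hypotheses pixel by pixel, weighting
# each hypothesis by the inverse variance of its virtual-channel noise.

def combine(hypotheses, variances):
    """hypotheses: equal-length pixel lists; variances: per-hypothesis
    noise variance. Returns the inverse-variance-weighted combination."""
    weights = [1.0 / v for v in variances]
    wsum = sum(weights)
    n = len(hypotheses[0])
    return [sum(w * h[i] for w, h in zip(weights, hypotheses)) / wsum
            for i in range(n)]

# The second hypothesis is noisier (variance 3), so it counts for less.
si = combine([[100, 100], [104, 108]], variances=[1.0, 3.0])
```

Recombining with updated variances whenever new bitplanes are decoded gives the iterative refinement the abstract describes.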
Abstract:
Low-density parity-check (LDPC) codes are nowadays one of the hottest topics in coding theory, notably due to their advantages in terms of bit error rate performance and low complexity. In order to exploit the potential of the Wyner-Ziv coding paradigm, practical distributed video coding (DVC) schemes should use powerful error-correcting codes with near-capacity performance. In this paper, new ways to design LDPC codes for the DVC paradigm are proposed and studied. The new LDPC solutions rely on merging parity-check nodes, which corresponds to reducing the number of rows in the parity-check matrix. This makes it possible to gracefully change the compression ratio of the source (DCT coefficient bitplane) according to the correlation between the original and the side information. The proposed LDPC codes achieve good performance for a wide range of source correlations and a better RD performance when compared to the popular turbo codes.
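The merging operation described above can be illustrated on a tiny parity-check matrix: merging two check nodes amounts to replacing two rows of H with their mod-2 (XOR) sum, so fewer syndrome bits are produced per bitplane. The matrix below is made up for illustration; real DVC codes merge rows according to carefully designed patterns to preserve decoding performance.

```python
# Sketch: merge two parity-check nodes by XORing the corresponding
# rows of H, reducing the number of checks (and syndrome bits) by one.

def merge_checks(H, i, j):
    """H: list of bit-list rows. Replace rows i and j with their mod-2 sum."""
    merged = [a ^ b for a, b in zip(H[i], H[j])]
    return [row for k, row in enumerate(H) if k not in (i, j)] + [merged]

H = [[1, 1, 0, 0],
     [0, 1, 1, 0],
     [0, 0, 1, 1]]
H2 = merge_checks(H, 0, 1)   # 3 checks -> 2: fewer syndrome bits sent
```

When the side information correlates well with the source, more merges can be applied (stronger compression); when it correlates poorly, the original, denser set of checks is kept.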
Abstract:
Processes are a central entity in enterprise collaboration. Collaborative processes need to be executed and coordinated on a distributed computational platform where computers are connected through heterogeneous networks and systems. Life-cycle management of such collaborative processes requires a framework able to handle their diversity, based on different computational and communication requirements. This paper proposes a rationale for such a framework, points out key requirements and proposes a strategy for a supporting technological infrastructure. Beyond the portability of collaborative process definitions among different technological bindings, a framework to handle the different life-cycle phases of those definitions is presented and discussed. (c) 2007 Elsevier Ltd. All rights reserved.
Abstract:
Master's dissertation in Environment, Health and Safety.