917 results for power engineering computing
Abstract:
Since privatisation, the maintenance of DNO LV feeder maximum demand records has gradually lapsed in some utility areas, and it is postulated that this lack of knowledge about 11 kV and LV electrical networks results in a less economical and less energy-efficient network as a whole. In an attempt to quantify the negative impact, this paper examines ten postulated new-connection scenarios for a set of real LV load readings, in order to find the difference in design solutions when LV load readings were and were not known. The load profiles of the substations were examined in order to explore their utilisation. In 70% of the scenarios explored, significant cost differences were found; these differed by an average of 1000% between schemes designed with and without load readings. Over-designing a system, and therefore operating more underutilised transformers, is both less financially beneficial and less energy efficient. The paper concludes that new-connection design is improved in terms of cost when based on known LV load information, which strengthens the case for regularly recorded maximum feeder demand and/or metering of LV feeders. © 2013 IEEE.
Abstract:
This paper reports the work of an MEng final-year student project that looks in detail at the impacts distributed generation can have on existing low-voltage distribution network protection systems. After a review of current protection issues, the paper investigates several key issues facing distributed generation connections with respect to network protection: blinding of protection, failure to reclose automatically, unintentional islanding, loss of mains, and false tripping of feeders. Each of these problems affects protection systems in its own way. The study reviews and investigates them through simulation demonstrations on one representative network in order to recommend practical solutions.
Abstract:
Florida International University Commencement Ceremony, May 1, 2012, at US Century Bank Arena (Session 1). Colleges graduated: College of Education; College of Engineering & Computing.
Abstract:
Florida International University Commencement Ceremony, December 12, 2011, at US Century Bank Arena (Session 1). Colleges graduated: College of Education; College of Engineering & Computing.
Abstract:
Preprint
Abstract:
The purpose of this work-in-progress study was to test the concept of recognising plants from images acquired by image sensors in a controlled, noise-free environment. The presence of vegetation on railway trackbeds and embankments presents potential problems. Woody plants (e.g. Scots pine, Norway spruce and birch) often establish themselves on railway trackbeds, and this is problematic because legal herbicides are not effective in controlling them, particularly conifers. If maintenance administrators knew the spatial positions of plants along the railway system, it might be feasible to harvest them mechanically. Primary data comprising around 700 leaves and conifer seedlings from 11 species were collected outdoors and then photographed in a laboratory environment. To classify the species in the acquired image set, a machine learning approach known as Bag-of-Features (BoF) was chosen. Irrespective of the chosen type of feature extraction and classifier, the ability to classify a previously unseen plant correctly was greater than 85%. Maintenance planning for vegetation control could be improved if plants were recognised and localised: woody plants in particular could be harvested mechanically, and listed endangered species growing on the trackbeds could be avoided. Both cases are likely to reduce the amount of herbicide used, which is often in the public interest. Bearing in mind that natural objects such as plants are often more heterogeneous within their own class than between classes, the results present stable classification performance, a sound prerequisite for the next step of including a natural background. Where relevant, species may also be listed under the Endangered Species Act.
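The abstract names the Bag-of-Features approach but not the specific feature extractor or classifier. The sketch below is a minimal BoF pipeline, assuming ORB descriptors, a k-means visual vocabulary and a linear SVM (all assumptions for illustration, not details from the paper):

# Minimal Bag-of-Features (BoF) classification sketch.
# Assumptions (not from the paper): ORB descriptors, k-means vocabulary, linear SVM.
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

def extract_descriptors(image_paths):
    orb = cv2.ORB_create()
    all_desc = []
    for path in image_paths:
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        _, desc = orb.detectAndCompute(img, None)
        all_desc.append(desc if desc is not None else np.empty((0, 32), np.uint8))
    return all_desc

def build_vocabulary(descriptor_sets, k=200):
    stacked = np.vstack([d for d in descriptor_sets if len(d)]).astype(np.float32)
    return KMeans(n_clusters=k, n_init=5, random_state=0).fit(stacked)

def bof_histogram(descriptors, vocab):
    # Each image becomes a normalised histogram of visual-word occurrences.
    hist = np.zeros(vocab.n_clusters, dtype=np.float32)
    if len(descriptors):
        for word in vocab.predict(descriptors.astype(np.float32)):
            hist[word] += 1
        hist /= hist.sum()
    return hist

def train_bof_classifier(image_paths, labels, k=200):
    desc_sets = extract_descriptors(image_paths)
    vocab = build_vocabulary(desc_sets, k)
    features = np.array([bof_histogram(d, vocab) for d in desc_sets])
    return vocab, LinearSVC().fit(features, labels)

The same histogram step is applied to an unseen image before calling the classifier's predict; the abstract reports that classification above 85% held irrespective of the particular extractor and classifier combination.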
Abstract:
Grid computing is an emerging technology that provides high-performance computing capability and collaboration mechanisms for solving complex, collaborative problems using existing resources. In this paper, a grid-computing-based framework is proposed for probabilistic power system reliability and security analysis. The proposed computing grid is named the Reliability and Security Grid (RSA-Grid), and its architecture is presented. A prototype system has been built for the further development of grid-based services for power system reliability and security assessment based on probabilistic techniques, which require high-performance computing and large amounts of memory. Preliminary results based on this prototype show that RSA-Grid can provide comprehensive assessment results for real power systems efficiently and economically.
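The abstract does not detail the probabilistic techniques behind RSA-Grid. As a hedged illustration of the kind of computation such a grid would distribute across worker nodes, the sketch below estimates a loss-of-load probability by Monte Carlo sampling; the generator capacities, outage rates and load level are invented for illustration:

# Illustrative Monte Carlo loss-of-load probability (LOLP) estimate -- the kind of
# embarrassingly parallel reliability computation a grid such as RSA-Grid could
# distribute across worker nodes.  All system data below are invented.
import random

def lolp_monte_carlo(gen_capacity, gen_outage_rate, load, samples=100_000, seed=0):
    rng = random.Random(seed)
    loss_of_load = 0
    for _ in range(samples):
        # Sample each generating unit's availability independently.
        available = sum(c for c, q in zip(gen_capacity, gen_outage_rate)
                        if rng.random() > q)
        if available < load:
            loss_of_load += 1
    return loss_of_load / samples

if __name__ == "__main__":
    capacity = [200, 200, 150, 100, 100]        # MW, hypothetical units
    outage = [0.05, 0.05, 0.08, 0.10, 0.10]     # forced outage rates, hypothetical
    print("estimated LOLP:", lolp_monte_carlo(capacity, outage, load=550))

Each worker node can run an independent batch of samples with its own seed, so the grid middleware only needs to aggregate the resulting counts.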
Abstract:
Modern compute systems continue to evolve towards increasingly complex, heterogeneous and distributed architectures. At the same time, functionality and performance are no longer the only concerns when developing applications for such systems; additional concerns such as flexibility, power efficiency, resource usage, reliability and cost are becoming increasingly important. This raises the question not only of how to develop applications for such systems efficiently, but also of how to cope with dynamic changes in application behaviour or the system environment. The EPiCS project aims to address these aspects by exploring self-awareness and self-expression. Self-awareness allows systems and applications to gather and maintain information about their current state and environment and to reason about their behaviour. Self-expression enables systems to adapt their behaviour autonomously to changing conditions. Innovations in EPiCS are based on the systematic integration of research on concepts and foundations, customisable hardware/software platforms and operating systems, and self-aware networking and middleware infrastructure. The developed technologies are validated in three application domains: computational finance, distributed smart cameras and interactive mobile media systems. © 2012 IEEE.
Abstract:
Over the past few decades, we have enjoyed tremendous benefits from the revolutionary advancement of computing systems, driven mainly by remarkable semiconductor technology scaling and increasingly complex processor architectures. However, the exponentially increased transistor density has led directly to exponentially increased power consumption and dramatically elevated system temperatures, which not only adversely affect a system's cost, performance and reliability, but also increase leakage and thus overall power consumption. Today, power and thermal issues pose enormous challenges and threaten to slow the continued evolution of computer technology. Effective power/thermal-aware design techniques are urgently needed at all design abstraction levels, from the circuit and logic levels to the architectural and system levels.

In this dissertation, we present our research on employing real-time scheduling techniques to solve resource-constrained power/thermal-aware design-optimization problems. We developed a set of simple yet accurate system-level models to capture the processor's thermal dynamics as well as the interdependency of leakage power consumption, temperature and supply voltage. Based on these models, we investigated the fundamental principles of power/thermal-aware scheduling and developed real-time scheduling techniques targeting a variety of design objectives, including peak temperature minimization, overall energy reduction and performance maximization.

The novelty of this work is that we integrate cutting-edge circuit-level and architectural-level research on power and thermal behaviour into a set of accurate yet simplified system-level models, enabling system-level analysis and design based on these models. The theoretical study in this work serves as a solid foundation for guiding the development of power/thermal-aware scheduling algorithms in practical computing systems.
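The dissertation's exact models are not reproduced in this abstract. The sketch below illustrates, under assumed placeholder coefficients, a first-order system-level thermal model with a temperature-dependent leakage term of the kind described (temperature driven by total power, leakage growing with temperature):

# First-order system-level thermal model with temperature-dependent leakage, of the
# kind the abstract describes.  All coefficients are placeholders, not values from
# the dissertation.
def simulate_temperature(power_trace, dt=0.1, t_amb=25.0, a=0.1, b=0.1, k_leak=0.01):
    """Integrate dT/dt = a * P_total - b * (T - T_amb), where P_total adds a
    leakage term that grows (here, linearly) with temperature."""
    temps = [t_amb]
    for p_dyn in power_trace:
        t = temps[-1]
        p_leak = k_leak * t                   # simplified leakage/temperature coupling
        t_next = t + dt * (a * (p_dyn + p_leak) - b * (t - t_amb))
        temps.append(t_next)
    return temps

if __name__ == "__main__":
    # A toy "schedule": a busy interval, an idle cooling interval, then busy again.
    trace = [30.0] * 300 + [5.0] * 300 + [30.0] * 300   # dynamic power in watts
    print("peak temperature ~ %.1f C" % max(simulate_temperature(trace)))

A scheduler built on such a model can, for example, insert idle intervals or lower the supply voltage whenever the predicted temperature approaches a peak-temperature constraint.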
Abstract:
The BR algorithm is a novel and efficient method for finding all eigenvalues of upper Hessenberg matrices, and it has not previously been applied to eigenanalysis for power system small-signal stability. This paper analyses the differences between the BR and QR algorithms, comparing their performance in terms of CPU time (based on stopping criteria) and storage requirements. The BR algorithm uses accelerating strategies to improve its performance when computing eigenvalues of narrowly banded, nearly tridiagonal upper Hessenberg matrices; these strategies significantly reduce computation time at a reasonable level of precision. Compared with the QR algorithm, the BR algorithm requires fewer iteration steps and less storage space without sacrificing precision when solving eigenvalue problems for large-scale power systems. Numerical examples demonstrate the efficiency of the BR algorithm in eigenanalysis of 39-, 68-, 115-, 300- and 600-bus systems. The experimental results suggest that the BR algorithm is the more efficient algorithm for large-scale power system small-signal stability eigenanalysis.
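The BR algorithm itself is not available in standard numerical libraries, so the sketch below only reproduces the QR-based baseline the paper compares against: reduce the state matrix to upper Hessenberg form, compute its eigenvalues, and report frequency and damping per mode. The state matrix here is a random placeholder shifted towards stability, not one of the paper's test systems:

# QR-based small-signal eigenanalysis baseline (the comparison case in the paper;
# the BR algorithm is not provided by standard numerical libraries).
import numpy as np
from scipy.linalg import hessenberg, eig

def small_signal_modes(A):
    """Reduce the state matrix to upper Hessenberg form, compute its eigenvalues
    with the QR-based solver, and report frequency and damping ratio per mode."""
    H = hessenberg(A)              # both BR and QR iterations start from this form
    modes = []
    for lam in eig(H, right=False):
        sigma, omega = lam.real, lam.imag
        freq = abs(omega) / (2.0 * np.pi)
        zeta = -sigma / np.hypot(sigma, omega) if abs(lam) > 0 else 1.0
        modes.append((lam, freq, zeta))
    return modes

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 40)) - 7.0 * np.eye(40)   # toy state matrix, shifted towards stability
    for lam, f, zeta in small_signal_modes(A)[:5]:
        print(f"lambda = {lam.real:+.3f}{lam.imag:+.3f}j  f = {f:.3f} Hz  zeta = {zeta:.3f}")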
Abstract:
Cloud computing is increasingly being adopted in different scenarios, such as social networking, business applications and scientific experiments. Relying on virtualization technology, the construction of these computing environments targets improvements in the infrastructure, such as power efficiency and the fulfilment of users' SLA specifications. The usual approach is to pack all the virtual machines onto the appropriate physical servers. However, failures in these networked computing systems can have a substantial negative impact on system performance, deviating the system from its initial objectives. In this work, we propose adapted algorithms for dynamically mapping virtual machines to physical hosts, in order to improve the power efficiency of the cloud infrastructure with low impact on the performance required by users. Our decision-making algorithms combine proactive fault-tolerance techniques, to deal with system failures, with virtual machine technology, to share node resources in an accurate and controlled manner. The results indicate that our algorithms perform better with respect to power efficiency and SLA fulfilment in the face of cloud infrastructure failures.
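The paper's mapping algorithms are not given in the abstract. As a rough illustration of the general idea, the sketch below consolidates virtual machines onto as few healthy hosts as possible with a first-fit-decreasing pass that skips hosts flagged by a hypothetical failure predictor; the names, capacities and failure signal are stand-ins, not the paper's method:

# Illustrative power-aware VM placement: first-fit-decreasing consolidation that
# avoids hosts predicted to fail.  The data structures and the failure signal are
# hypothetical stand-ins, not the paper's algorithms.
from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    cpu_capacity: float
    likely_to_fail: bool = False             # proactive fault-tolerance prediction
    vms: list = field(default_factory=list)

    def used(self):
        return sum(cpu for _, cpu in self.vms)

def place_vms(vms, hosts):
    """Pack (vm_name, cpu_demand) pairs onto as few healthy hosts as possible, so
    that unused hosts can be powered down; returns a vm -> host mapping."""
    mapping = {}
    for vm_name, demand in sorted(vms, key=lambda v: v[1], reverse=True):
        for host in hosts:
            if host.likely_to_fail:
                continue                     # proactively avoid suspect hosts
            if host.used() + demand <= host.cpu_capacity:
                host.vms.append((vm_name, demand))
                mapping[vm_name] = host.name
                break
    return mapping

if __name__ == "__main__":
    hosts = [Host("h1", 8.0), Host("h2", 8.0, likely_to_fail=True), Host("h3", 8.0)]
    vms = [("web", 3.0), ("db", 4.0), ("batch", 2.0), ("cache", 1.5)]
    print(place_vms(vms, hosts))

Consolidating load onto fewer hosts saves power, while skipping suspect hosts reduces the chance that an SLA is violated by a failure, which is the trade-off the paper's algorithms balance.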
Abstract:
Thesis submitted in fulfilment of the requirements for the degree of Master in Electronic and Telecommunications Engineering.
Abstract:
WorldFIP is standardised as European Norm EN 50170 - General Purpose Field Communication System. Field communication systems (fieldbuses) have come to be widely used as the communication support for distributed computer-controlled systems (DCCS), and they are used in all sorts of process control and manufacturing applications across different industries. There are several advantages in using fieldbuses to replace the traditional point-to-point links between sensors/actuators and computer-based control systems. Some are economic (cable savings), but, more importantly, fieldbuses allow increased decentralisation and distribution of processing power over the field. Typically, DCCS have real-time requirements that must be fulfilled: process data must be transferred between network computing nodes within a maximum admissible time span. WorldFIP has very interesting mechanisms for scheduling data transfers; it explicitly distinguishes two types of traffic, periodic and aperiodic. In this paper we describe how WorldFIP handles these two types of traffic and, more importantly, provide a comprehensive analysis for guaranteeing the real-time requirements of both. A major contribution is the analysis of the worst-case response time of aperiodic transfer requests.
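The response-time analysis itself is developed in the body of the paper. The sketch below is only a coarse, conservative illustration: an aperiodic transfer is assumed to be served solely in the slack left in each elementary cycle after periodic traffic, and the bound counts the cycles needed to drain the queued work plus one cycle for worst-case arrival phasing. The cycle length, periodic load and message lengths are placeholders, and the real WorldFIP analysis accounts for arbitration-table structure and identifier exchanges that this toy model ignores:

# Coarse, conservative completion-time bound for an aperiodic transfer that is only
# served in the slack of each elementary cycle.  Parameters are placeholders; the
# paper's analysis is more detailed.
import math

def aperiodic_wcrt_bound(cycle_len, periodic_load, request_len, queued_ahead=0.0):
    """Each elementary cycle of length cycle_len spends periodic_load on periodic
    (buffer-transfer) traffic; only the remaining slack serves aperiodic requests,
    FIFO.  Returns an upper bound on the time to complete one request."""
    slack = cycle_len - periodic_load
    if slack <= 0:
        raise ValueError("periodic traffic fills the cycle: no aperiodic bandwidth")
    work = queued_ahead + request_len           # aperiodic work that must drain first
    serving_cycles = math.ceil(work / slack)    # cycles needed to drain that work
    # One extra cycle covers worst-case arrival phasing (the request arrives just
    # after the current cycle's aperiodic window has been used up).
    return (serving_cycles + 1) * cycle_len

if __name__ == "__main__":
    # Hypothetical numbers: 10 ms cycles, 6 ms of periodic traffic per cycle,
    # a 3 ms request arriving behind 5 ms of already-queued aperiodic work.
    print(aperiodic_wcrt_bound(cycle_len=10.0, periodic_load=6.0,
                               request_len=3.0, queued_ahead=5.0), "ms")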
Abstract:
The alignment between competences, teaching-learning methodologies and assessment is a key element of the European Higher Education Area. This paper presents the efforts of six Telematics, Computer Science and Electronic Engineering Education teachers towards achieving this alignment in their subjects. In joint work with pedagogues, a set of recommended actions was identified; a selection of these actions was applied and evaluated in the six subjects. The cross-analysis of the results indicates that the actions help students better understand the methodologies and assessment planned for the subjects, facilitate (self-)regulation, and increase students' involvement in the subjects.