990 results for communication infrastructure


Relevance:

60.00%

Publisher:

Abstract:

Climate change is arguably the most critical issue facing our generation and the next. As we move towards a sustainable future, the grid is rapidly evolving with the integration of ever more renewable energy resources and the emergence of electric vehicles. In particular, the large-scale adoption of residential and commercial solar photovoltaic (PV) plants is completely changing the traditionally slowly varying, unidirectional power flow of distribution systems. A high share of intermittent renewables poses several technical challenges, including voltage and frequency control. But along with these challenges, renewable generators also bring millions of new DC-AC inverter controllers each year. These fast power electronic devices offer an unprecedented opportunity to increase energy efficiency and improve power quality, if combined with well-designed inverter control algorithms. The main goal of this dissertation is to develop scalable power flow optimization and control methods that achieve system-wide efficiency, reliability, and robustness for the distribution networks of the future, with high penetration of distributed inverter-based renewable generators.

Proposed solutions to power flow control problems in the literature range from fully centralized to fully local. In this thesis, we focus on the two ends of that spectrum. In the first half (chapters 2 and 3), we seek optimal solutions to voltage control problems given a centralized architecture with complete information. These solutions are particularly important for understanding the overall system behavior and can serve as a benchmark against which to compare the performance of other control methods. To this end, we first propose a branch flow model (BFM) for the analysis and optimization of radial and meshed networks. This model leads to a new approach to solving optimal power flow (OPF) problems using a two-step relaxation procedure, which has proven both reliable and computationally efficient in dealing with the non-convexity of the power flow equations in radial and weakly meshed distribution networks. We then apply the results to the fast time-scale inverter var control problem and evaluate the performance on real-world circuits in Southern California Edison's service territory.
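The two-step relaxation itself is beyond a short sketch, but the branch-flow view builds on solving power flow over radial feeders. The following is a minimal sketch of the classic backward/forward sweep on a toy two-segment feeder, included only to make the radial power-flow setting concrete; the line data and loads are invented and this is not the thesis's relaxation procedure.

```python
# Backward/forward sweep power flow for a radial feeder (per-unit values).
# A hedged sketch: illustrates iteratively solving the nonlinear power
# flow equations on a radial network; all line data and loads are invented.

def sweep_power_flow(r, x, loads, v0=1.0, iters=50):
    """Solve power flow on a single radial feeder.
    r, x: line resistance/reactance per segment (p.u.)
    loads: complex power drawn at each downstream bus (p.u.)
    Returns complex bus voltages, bus 0 being the substation."""
    n = len(loads)
    v = [complex(v0)] * (n + 1)
    for _ in range(iters):
        # Backward sweep: accumulate branch currents from the leaves.
        i_branch = [0j] * n
        for k in range(n - 1, -1, -1):
            i_load = (loads[k] / v[k + 1]).conjugate()
            i_branch[k] = i_load + (i_branch[k + 1] if k + 1 < n else 0j)
        # Forward sweep: update voltages outward from the substation.
        for k in range(n):
            v[k + 1] = v[k] - complex(r[k], x[k]) * i_branch[k]
    return v

volts = sweep_power_flow(r=[0.01, 0.02], x=[0.02, 0.04],
                         loads=[0.5 + 0.2j, 0.3 + 0.1j])
print([round(abs(u), 4) for u in volts])  # magnitudes drop along the feeder
```

The voltage magnitudes decrease monotonically along the feeder because both loads consume power, which is the behavior the var control methods in the thesis aim to correct.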

The second half (chapters 4 and 5), however, is dedicated to studying local control approaches, as they are the only options available for immediate implementation on today's distribution networks, which lack sufficient monitoring and communication infrastructure. In particular, we follow a reverse- and forward-engineering approach to study the recently proposed piecewise linear volt/var control curves. The aim of this dissertation is to tackle some key problems in these two areas and to contribute a rigorous theoretical basis for future work.
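As a concrete illustration of the local control object studied here, the sketch below implements a generic piecewise-linear volt/var curve with a deadband. The breakpoints and var limit are invented for illustration, not taken from the thesis or any standard.

```python
# A minimal sketch of a piecewise-linear volt/var control curve of the kind
# studied in the local-control chapters. All breakpoints are invented.

def volt_var_curve(v, v_low=0.95, v_dead_lo=0.98, v_dead_hi=1.02,
                   v_high=1.05, q_max=0.44):
    """Map measured voltage (p.u.) to a reactive power setpoint (p.u.).
    Inject vars when voltage is low, absorb when high, with a deadband."""
    if v <= v_low:
        return q_max
    if v < v_dead_lo:                       # ramp down from q_max to 0
        return q_max * (v_dead_lo - v) / (v_dead_lo - v_low)
    if v <= v_dead_hi:                      # deadband: no var support
        return 0.0
    if v < v_high:                          # ramp from 0 to -q_max
        return -q_max * (v - v_dead_hi) / (v_high - v_dead_hi)
    return -q_max

for v in (0.94, 0.965, 1.00, 1.04, 1.06):
    print(v, round(volt_var_curve(v), 3))
```

Each inverter evaluates this curve on its own local voltage measurement, which is what makes the scheme deployable without communication infrastructure.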

Relevance:

60.00%

Publisher:

Abstract:

Dissertation presented to Universidade Fernando Pessoa in partial fulfilment of the requirements for the degree of Master in Architecture and Urbanism.

Relevance:

60.00%

Publisher:

Abstract:

The exploding demand for services like the World Wide Web reflects the potential presented by globally distributed information systems. The number of WWW servers worldwide has doubled every 3 to 5 months since 1993, outstripping even the growth of the Internet. At each of these self-managed sites, the Common Gateway Interface (CGI) and Hypertext Transfer Protocol (HTTP) already constitute a rudimentary basis for contributing local resources to remote collaborations. However, the Web has serious deficiencies that make it unsuited for use as a true medium for metacomputing --- the process of bringing hardware, software, and expertise from many geographically dispersed sources to bear on large-scale problems. These deficiencies are, paradoxically, the direct result of the very simple design principles that enabled its exponential growth. There are many symptoms of the problems exhibited by the Web: disk and network resources are consumed extravagantly; information search and discovery are difficult; protocols are aimed at data movement rather than task migration, and ignore the potential for distributing computation. All of these, however, can be seen as aspects of a single problem: as a distributed system for metacomputing, the Web offers unpredictable performance and unreliable results. The goal of our project is to use the Web as a medium (within either the global Internet or an enterprise intranet) for metacomputing in a reliable way, with performance guarantees. We attack this problem on four levels: (1) Resource Management Services: Globally distributed computing allows novel approaches to the old problems of performance guarantees and reliability.
Our first set of ideas involves setting up a family of real-time resource management models organized by the Web Computing Framework, with a standard Resource Management Interface (RMI), a Resource Registry, a Task Registry, and resource management protocols that allow resource needs and availability information to be collected and disseminated, so that a family of algorithms with varying computational precision and accuracy of representation can be chosen to meet real-time and reliability constraints. (2) Middleware Services: Complementary to techniques for allocating and scheduling available resources to serve application needs under real-time and reliability constraints, the second set of ideas aims at reducing communication latency, traffic congestion, server workload, and so on. We develop customizable middleware services that exploit application characteristics in traffic analysis to drive new server/browser design strategies (e.g., exploiting the self-similarity of Web traffic), derive document access patterns via multiserver cooperation, and use them in speculative prefetching, document caching, and aggressive replication to reduce server load and bandwidth requirements. (3) Communication Infrastructure: To achieve any guarantee of quality of service or performance, one must reach the network layer, which provides the basic guarantees of bandwidth, latency, and reliability. The third area is therefore a set of new techniques in network service and protocol design. (4) Object-Oriented Web Computing Framework: A useful resource management system must deal with job priority, fault tolerance, quality of service, complex resources such as ATM channels, probabilistic models, etc., and models must be tailored to represent the best tradeoff for a particular setting. This requires a family of models, organized within an object-oriented framework, because no one-size-fits-all approach is appropriate.
This presents a software engineering challenge requiring integration of solutions at all levels: algorithms, models, protocols, and profiling and monitoring tools. The framework captures the abstract class interfaces of the collection of cooperating components, but allows the concretization of each component to be driven by the requirements of a specific approach and environment.
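The speculative-prefetching idea in the middleware layer (point 2 above) can be sketched as learning first-order document access patterns and prefetching the most likely successor. The access log, class name, and confidence threshold below are all invented for illustration; the project's actual mechanisms are not specified in the abstract.

```python
# A hedged sketch of speculative prefetching: track which document tends to
# follow which, and prefetch the likely successor when confidence is high.

from collections import defaultdict

class Prefetcher:
    def __init__(self, threshold=0.5):
        self.counts = defaultdict(lambda: defaultdict(int))  # doc -> successor counts
        self.threshold = threshold
        self.last = None

    def record(self, doc):
        """Observe an access and update successor statistics."""
        if self.last is not None:
            self.counts[self.last][doc] += 1
        self.last = doc

    def predict(self, doc):
        """Return the successor worth prefetching, or None if confidence is low."""
        succ = self.counts.get(doc)
        if not succ:
            return None
        best, hits = max(succ.items(), key=lambda kv: kv[1])
        return best if hits / sum(succ.values()) >= self.threshold else None

p = Prefetcher()
for doc in ["index", "news", "index", "news", "index", "about"]:
    p.record(doc)
print(p.predict("index"))  # "news": it followed "index" in 2 of 3 observed cases
```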

Relevance:

60.00%

Publisher:

Abstract:

Recent advances in processor speeds, mobile communications, and battery life have enabled computers to evolve from completely wired to completely mobile. In the most extreme case, all nodes are mobile and communication takes place at available opportunities, using both traditional communication infrastructure and the mobility of intermediate nodes. These are mobile opportunistic networks. Data communication in such networks is a difficult problem because of the dynamic underlying topology, the scarcity of network resources, and the lack of global information. Establishing end-to-end routes in such networks is usually not feasible; instead, a store-and-carry forwarding paradigm is better suited. This dissertation describes and analyzes algorithms for forwarding messages in such networks. To design effective forwarding algorithms for mobile opportunistic networks, we start by building an understanding of the set of all paths between nodes, which represent the available opportunities for any forwarding algorithm. Relying on real measurements, we enumerate paths between nodes and uncover what we refer to as the path explosion effect: the number of paths between a randomly selected pair of nodes increases exponentially with time. We draw from the theory of epidemics to model and explain this effect. This is the first contribution of the thesis, and a key observation that underlies subsequent results. Our second contribution is the study of forwarding algorithms. For this, we rely on trace-driven simulations of different algorithms that span a range of design dimensions. We compare the performance (success rate and average delay) of these algorithms and make the surprising observation that most of the algorithms we consider have roughly similar performance. We explain this result in light of the path explosion phenomenon.
While the performance of most algorithms we studied was roughly the same, the algorithms differed in cost. This prompted us to design algorithms with the explicit intent of reducing cost. For this, we cast forwarding as an optimal stopping problem. Our third main contribution is the design of strategies based on optimal stopping principles, which we refer to as delegation schemes. Our analysis shows that a delegation scheme reduces cost over naive forwarding by a factor of O(√N), where N is the number of nodes in the network. We further validate this result on real traces, where the observed cost reduction is even greater. Our results so far rely on a key assumption: unbounded buffers at nodes. We next relax this assumption, so that the problem shifts to prioritizing messages for transmission and dropping. Our fourth contribution is the study of message prioritization schemes combined with forwarding. Our main result is that higher performance is achieved by assigning higher priority to young messages in the network. We again interpret this result in light of the path explosion effect.
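The delegation idea can be sketched in a few lines: a message holder hands a copy to a contact only if that contact's quality exceeds the best quality the holder has seen so far for that message. The node qualities and the synthetic contact trace below are invented; real evaluations in the thesis use measured traces.

```python
# A hedged sketch of delegation forwarding on a synthetic contact trace.
# Each carrier delegates only to contacts of strictly higher quality than
# any it has encountered, which is what keeps the number of copies low.

import random

def delegation_forwarding(contacts, quality, source):
    """contacts: time-ordered list of (a, b) meeting pairs.
    Returns the set of nodes that end up holding a copy."""
    carriers = {source}
    threshold = {source: quality[source]}   # best quality seen per carrier
    for a, b in contacts:
        for holder, other in ((a, b), (b, a)):
            if holder in carriers and other not in carriers:
                if quality[other] > threshold[holder]:
                    carriers.add(other)
                    threshold[other] = quality[other]
                    threshold[holder] = quality[other]
    return carriers

random.seed(1)
nodes = list(range(20))
quality = {n: random.random() for n in nodes}          # e.g. contact rate
trace = [tuple(random.sample(nodes, 2)) for _ in range(200)]
copies = delegation_forwarding(trace, quality, source=0)
print(len(copies), "of", len(nodes), "nodes hold a copy")
```

Because every new carrier must beat a rising quality threshold, far fewer than N nodes receive copies, which is the intuition behind the O(√N) cost bound.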

Relevance:

60.00%

Publisher:

Abstract:

The increased complexity and interconnectivity of Supervisory Control and Data Acquisition (SCADA) systems in the Smart Grid has exposed them to a wide range of cyber-security issues, and there are a multitude of potential access points for cyber attackers. This paper presents a SCADA-specific cyber-security test-bed which contains SCADA software and communication infrastructure. This test-bed is used to investigate an Address Resolution Protocol (ARP) spoofing based man-in-the-middle attack. Finally, the paper proposes a future work plan which focuses on applying intrusion detection and prevention technology to address cyber-security issues in SCADA systems.
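The detection side of the ARP-spoofing attack studied on the test-bed can be sketched as bookkeeping over observed ARP replies: remember each IP-to-MAC binding and flag replies that rebind a known IP. The addresses below are invented, and this sketch is a generic detector, not the paper's test-bed implementation.

```python
# A hedged sketch of ARP-spoofing detection: a man-in-the-middle attack
# typically rebinds the gateway's IP to the attacker's MAC, so conflicting
# (ip, mac) observations are the telltale sign.

def detect_arp_spoof(arp_replies):
    """arp_replies: iterable of (ip, mac) pairs from observed ARP replies.
    Returns a list of (ip, old_mac, new_mac) conflicts."""
    bindings = {}
    alerts = []
    for ip, mac in arp_replies:
        if ip in bindings and bindings[ip] != mac:
            alerts.append((ip, bindings[ip], mac))   # possible spoofing
        else:
            bindings[ip] = mac
    return alerts

replies = [
    ("192.168.1.10", "aa:bb:cc:00:00:01"),
    ("192.168.1.1",  "aa:bb:cc:00:00:02"),
    ("192.168.1.1",  "de:ad:be:ef:00:03"),  # attacker rebinding the gateway
]
print(detect_arp_spoof(replies))  # one conflict, on the gateway's IP
```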

Relevance:

60.00%

Publisher:

Abstract:

Increased complexity and interconnectivity of Supervisory Control and Data Acquisition (SCADA) systems in Smart Grids potentially means greater susceptibility to malicious attackers. SCADA systems with legacy communication infrastructure have inherent cyber-security vulnerabilities, as these systems were originally designed with little consideration of cyber threats. To improve the cyber-security of SCADA networks, this paper presents a rule-based Intrusion Detection System (IDS) using a Deep Packet Inspection (DPI) method, which includes signature-based and model-based approaches tailored for SCADA systems. The proposed signature-based rules can accurately detect several known suspicious or malicious attacks. In addition, model-based detection is proposed as a complementary method for detecting unknown attacks. Finally, the proposed intrusion detection approaches for SCADA networks are implemented and verified using a rule-based method.
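The signature-based half of such a rule-based IDS amounts to matching decoded packet fields against a rule list. The rules, field names, and packet below are invented for illustration; they are not the paper's rule set or any Snort syntax.

```python
# A hedged sketch of signature-based deep packet inspection for a SCADA
# protocol: each rule pairs a predicate over decoded fields with an alert.

RULES = [
    {"name": "write to protected register",
     "match": lambda p: p["func"] == "WRITE" and p["addr"] < 100},
    {"name": "oversized payload",
     "match": lambda p: len(p["payload"]) > 256},
    {"name": "unauthorized master",
     "match": lambda p: p["src"] not in {"10.0.0.5", "10.0.0.6"}},
]

def inspect(packet):
    """DPI pass: return the names of all rules the packet trips."""
    return [r["name"] for r in RULES if r["match"](packet)]

pkt = {"src": "10.0.0.99", "func": "WRITE", "addr": 42, "payload": b"\x00" * 8}
print(inspect(pkt))  # trips the register-write and unauthorized-master rules
```

Model-based detection would complement this by flagging traffic that deviates from a learned model of normal SCADA behaviour, catching attacks with no known signature.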

Relevance:

60.00%

Publisher:

Abstract:

Increased complexity and interconnectivity of Supervisory Control and Data Acquisition (SCADA) systems in Smart Grids potentially means greater susceptibility to malicious attackers. SCADA systems with legacy communication infrastructure have inherent cyber-security vulnerabilities, as these systems were originally designed with little consideration of cyber threats. To improve the cyber-security of SCADA networks, this paper presents a rule-based Intrusion Detection System (IDS) using a Deep Packet Inspection (DPI) method, which includes signature-based and model-based approaches tailored for SCADA systems. The proposed signature-based rules can accurately detect several known suspicious or malicious attacks. In addition, model-based detection is proposed as a complementary method for detecting unknown attacks. Finally, the proposed intrusion detection approaches for SCADA networks are implemented and verified via Snort rules.

Relevance:

60.00%

Publisher:

Abstract:

In distributed networks, it is often useful for the nodes to be aware of dense subgraphs: a dense subgraph could reveal dense substructures in otherwise sparse graphs (e.g., the World Wide Web or social networks), which might correspond to community clusters or dense regions worth covering with good communication infrastructure. In this work, we address the problem of self-awareness of nodes in a dynamic network with regard to graph density, i.e., we give distributed algorithms for maintaining dense subgraphs that the member nodes are aware of. The only knowledge the nodes need is the dynamic diameter D, i.e., the maximum number of rounds it takes for a message to traverse the dynamic network. We consider a model where the number of nodes is fixed, but a powerful adversary can add or remove a limited number of edges from the network at each time step. Communication is by broadcast only and follows the CONGEST model. Our algorithms are continuously executed on the network, and at any time (after some initialization) each node is aware of whether or not it is part of a particular dense subgraph. We give algorithms that (2 + ε)-approximate the densest subgraph and (3 + ε)-approximate the at-least-k-densest subgraph (for a given parameter k). Our algorithms work for a wide range of parameter values and run in O(D log n) time. Further, a special case of our results also gives the first fully decentralized approximation algorithms for the densest and at-least-k-densest subgraph problems on static distributed graphs. © 2012 Springer-Verlag.
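For intuition about what a (2 + ε)-approximation of the densest subgraph buys, the classic centralized counterpart is greedy peeling (repeatedly removing a minimum-degree node and keeping the densest prefix), which 2-approximates the optimum. The sketch below is that centralized baseline on an invented graph, not the paper's distributed CONGEST algorithm.

```python
# A hedged sketch of greedy peeling, the centralized 2-approximation for
# densest subgraph that the distributed algorithms above are benchmarked
# against. Density is |E|/|V|. The example graph is invented.

def densest_subgraph(edges, nodes):
    """Repeatedly remove a minimum-degree node; return the densest prefix."""
    adj = {v: set() for v in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    best, best_density = set(nodes), len(edges) / len(nodes)
    live, m = set(nodes), len(edges)
    while len(live) > 1:
        v = min(live, key=lambda u: len(adj[u]))   # minimum-degree node
        m -= len(adj[v])                           # its edges leave the graph
        for u in adj[v]:
            adj[u].discard(v)
        adj[v].clear()
        live.discard(v)
        if m / len(live) > best_density:
            best, best_density = set(live), m / len(live)
    return best, best_density

# A 4-clique with a pendant path: the clique is the densest part.
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3), (3, 4), (4, 5)]
sub, density = densest_subgraph(edges, nodes=range(6))
print(sorted(sub), density)  # the 4-clique, density 6/4 = 1.5
```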

Relevance:

60.00%

Publisher:

Abstract:

Final Master's project submitted in partial fulfilment of the requirements for the degree of Master in Electronics and Telecommunications Engineering.

Relevance:

60.00%

Publisher:

Abstract:

In Distributed Computer-Controlled Systems (DCCS), special emphasis must be given to the communication infrastructure, which must provide timely and reliable communication services. CAN networks are usually suitable for supporting small-scale DCCS. However, they are known to present some reliability problems, which can lead to unreliable behaviour of the supported applications. In this paper, an atomic multicast protocol for CAN networks is proposed. This protocol exploits CAN's synchronous properties to provide a timely and reliable service to the supported applications. An implementation of the protocol in Ada, on top of the Ada version of Real-Time Linux, is presented and used to demonstrate the advantages and disadvantages of the platform for supporting reliable communications in DCCS.
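The property such a protocol builds on is CAN's bus-wide arbitration: among all pending frames, the one with the lowest identifier wins and is seen by every node, so all nodes observe the same frame order. The toy model below illustrates only that ordering property (frames and node names are invented); it is not the paper's protocol, which additionally handles error and omission failures.

```python
# A hedged toy model of CAN arbitration: all queued frames are delivered
# bus-wide in identifier order, giving every node the same total order.

import heapq

class CanBus:
    def __init__(self, node_names):
        self.pending = []                     # min-heap keyed by identifier
        self.logs = {n: [] for n in node_names}

    def queue(self, ident, payload):
        heapq.heappush(self.pending, (ident, payload))

    def drain(self):
        while self.pending:
            ident, payload = heapq.heappop(self.pending)  # arbitration winner
            for log in self.logs.values():                # broadcast to all nodes
                log.append((ident, payload))

bus = CanBus(["A", "B", "C"])
bus.queue(0x40, "setpoint")
bus.queue(0x10, "alarm")      # lower identifier: wins arbitration
bus.drain()
print(bus.logs["A"])  # every node logs the same sequence, the alarm first
```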

Relevance:

60.00%

Publisher:

Abstract:

Adapted communication systems for efficient use in decentralized electrical supply structures - In public electricity grids, information exchange has long been handled successfully by historically grown and adapted systems. Given a broad base of experience and a well-developed communication infrastructure, connecting a participant in the public supply grid to the information infrastructure is generally not an obstacle. The situation is different in decentralized supply structures. Since the electrification of decentralized supply areas, through the networking of many distributed generation plants and the construction of distribution grids not connected to the public electricity grid (minigrids), has gained popularity only in recent years, few projects have been completed to date. For the information-technology connection of participants in these structures, this means that only very limited experience is available to draw on when selecting a system. This dissertation therefore develops a decision-making process (a guideline for system selection) that enables the choice of a suitable communication system for deployment in decentralized electrical supply structures, based on a direct comparison of communication systems using derived evaluation criteria and types, and on reducing the comparison to two system values: the relative growth in expected utility and the growth in total cost. Following classical decision theory, the calculation of an expected utility per communication system, as the sum over all criteria of the products of utility score and weighting factor, unites the technical parameters and application-specific aspects as well as the subjective assessments into a single value.
Determining the total annual expenditure required for a communication system, or for the envisaged communication tasks, as a function of the application provides a second decision parameter for system selection alongside the expected utility. Choosing suitable reference quantities then reduces the decision among the candidate systems to a comparison against a reference system. What matters is not the absolute differences in expected utility or total annual cost, but how each system performs relative to the reference: the relative growth in expected utility and in total cost of each system is the decisive figure for system selection. Entering the computed relative utility and total-cost growth values into a newly developed four-quadrant matrix allows a simple (graphical) decision on the communication system best suited to the application, based on the position of the corresponding value pairs. An exemplary system selection, based on the analysis of communication systems for use in decentralized electrical supply structures, illustrates and verifies the handling of the developed concept. The subsequent realization, modification, and testing of the previously selected Distribution Line Carrier system further underlines the efficiency of the developed decision-making process. Overall, the decision maker is provided with a tool that allows simple and practicable decision making.
The developed concept makes possible, for the first time, a holistic view that takes into account the technical and application-specific as well as the economic aspects and constraints. The decision-making concept is not limited to system selection for decentralized electrical supply structures; with appropriate modification of the requirements, system parameters, and so on, it can be transferred to other applications.
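The core of the selection procedure can be sketched numerically: compute a weighted expected utility per system, express utility and annual cost relative to a reference system, and classify the resulting value pair by quadrant. All systems, weights, scores, and costs below are invented; only the scoring scheme follows the description above.

```python
# A hedged sketch of the system-selection scoring: weighted expected
# utility, relative growth versus a reference system, quadrant decision.

def expected_utility(scores, weights):
    """Weighted sum of per-criterion utility scores (each in 0..1)."""
    return sum(scores[c] * w for c, w in weights.items())

weights = {"bandwidth": 0.3, "range": 0.3, "reliability": 0.4}
systems = {  # name -> (per-criterion utility scores, annual cost)
    "reference (DLC)": ({"bandwidth": 0.5, "range": 0.8, "reliability": 0.7}, 1000.0),
    "radio link":      ({"bandwidth": 0.8, "range": 0.6, "reliability": 0.6}, 1400.0),
    "fiber":           ({"bandwidth": 1.0, "range": 0.9, "reliability": 0.9}, 3000.0),
}

ref_scores, ref_cost = systems["reference (DLC)"]
ref_u = expected_utility(ref_scores, weights)
for name, (scores, cost) in systems.items():
    du = expected_utility(scores, weights) / ref_u - 1.0   # relative utility growth
    dc = cost / ref_cost - 1.0                             # relative cost growth
    quadrant = ("favourable" if du >= 0 and dc <= 0 else
                "trade-off" if du >= 0 else "unfavourable")
    print(f"{name}: utility {du:+.2f}, cost {dc:+.2f} -> {quadrant}")
```

Plotting the (du, dc) pairs gives the four-quadrant matrix described above: systems with higher utility at lower cost than the reference are the clear winners.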

Relevance:

60.00%

Publisher:

Abstract:

The need to prepare the information and communication technology sector for the challenges brought by the development of convergence in all its dimensions called for a new balance between promoting the sector's competitive development and honouring the social coverage commitments that derive from the public-service nature of telecommunications. Accordingly, from early 2007 onwards, with constant intra-governmental and sectoral feedback, work proceeded on structuring the pillars of Bill 112/07 Cámara - 340/08 Senado, which culminated in the presidential sanction of Ley 1341 on 30 July 2009. This new legal framework for a constantly evolving sector is an unprecedented milestone, breaking with a tradition of more than ten years and six failed attempts at legislative and institutional reform.

Relevance:

60.00%

Publisher:

Abstract:

This study considers the impact of the university service and learning environments (which we define as non-educational factors) on student satisfaction among international postgraduate students from Asia studying in Australian universities. It is based on the expectations/perceptions paradigm and analyses the relationship between key variables and the overall satisfaction of student groups with respect to their service and learning environments. The aim of this paper is to consider the importance of non-educational factors to international postgraduate university students, in particular with regard to information and communication, infrastructure, and university recognition. The data used in this study are derived from a mail survey conducted among international postgraduate students from China, India, Indonesia, and Thailand studying at five universities in Victoria. Structural Equation Modelling was used to examine the relationships between the constructs in this study. The results indicate that non-education-related factors are very important to international postgraduate students and are predictors of overall satisfaction.

Relevance:

60.00%

Publisher:

Abstract:

Australia is in the midst of a massive transformation of its communication infrastructure. The AUD 43 billion Australian National Broadband Network (NBN), to be set up by the wholly Federal government-owned NBNCo Limited (NBNCo), is the largest infrastructure project ever proposed in Australia (NBN, 2010). It has the capacity to combine features and technologies that were once separate but have now converged, including computing, telephony, free-to-air (FTA) television, direct-to-home satellite broadcasting, radio, and the internet. This means that current thinking about these media technologies, developed through the processes of convergence and regulation, requires review. Future services for digital television will be more akin to the app-based functions currently available on mobiles and tablets, but accessed via the television screen rather than the PC. Against this background, this article examines the Australian ‘televisual’ space, arguing that faster broadband and internet-enabled televisions for movies, shows, communication, and more, available when it suits the audience, are the keys to television’s survival through visually networked possibilities.