87 results for Embedded network
at Instituto Politécnico do Porto, Portugal
Abstract:
Master's degree in Electrical and Computer Engineering
Abstract:
Despite the steady increase in experimental deployments, most research work on WSNs has focused only on communication protocols and algorithms, with a clear lack of effective, feasible and usable system architectures integrated in a modular platform able to address both functional and non-functional requirements. In this paper, we outline EMMON [1], a full WSN-based system architecture for large-scale, dense and real-time embedded monitoring [3] applications. EMMON provides a hierarchical communication architecture together with integrated middleware and command and control software. Then EM-Set, the EMMON engineering toolset, is presented. EM-Set includes network deployment planning, worst-case analysis and dimensioning, protocol simulation, and automatic remote programming and hardware testing tools. This toolset was crucial for the development of EMMON, which was designed to use standard commercially available technologies while maintaining as much flexibility as possible to meet specific application requirements. Finally, the EMMON architecture has been validated through extensive simulation and experimental evaluation, including a 300+ node testbed.
Abstract:
Demands for functionality enhancements, cost reductions and power savings clearly suggest the introduction of multi- and many-core platforms in real-time embedded systems. However, when compared to uni-core platforms, many-core platforms experience additional problems, namely the lack of scalable coherence mechanisms and the necessity to perform migrations. These problems have to be addressed before such systems can be considered for integration into the real-time embedded domain. We have devised several agreement protocols which solve some of the aforementioned issues. The protocols allow applications to plan and organise their future executions both temporally and spatially (i.e. when and where the next job will be executed). Decisions can be driven by several factors, e.g. load balancing, energy savings and thermal issues. All presented protocols are analytically described, with particular emphasis on their respective real-time behaviours and worst-case performance. The underlying assumptions are based on the multi-kernel model and the message-passing paradigm, which constitutes the communication between the interacting instances.
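As a toy illustration of this message-passing setting, the Python sketch below shows a single agreement round in which cores exchange load reports and deterministically converge on where the next job executes. The one-round scheme, the Core class and the load-balancing rule are illustrative assumptions, not the protocols devised in the paper.

from dataclasses import dataclass, field

@dataclass
class Core:
    core_id: int
    load: float                      # current utilisation of this core
    inbox: list = field(default_factory=list)

def agree_on_next_core(cores):
    # Phase 1: every core broadcasts its current load (pure message
    # passing, no shared memory, in line with the multi-kernel model).
    for sender in cores:
        for receiver in cores:
            receiver.inbox.append((sender.core_id, sender.load))
    # Phase 2: each core applies the same deterministic rule to the same
    # set of messages, so all cores reach the same decision without
    # relying on cache-coherent shared state.
    decisions = set()
    for core in cores:
        winner = min(core.inbox, key=lambda m: (m[1], m[0]))[0]
        decisions.add(winner)
        core.inbox.clear()
    assert len(decisions) == 1       # every core agreed on the same target
    return decisions.pop()

cores = [Core(0, 0.7), Core(1, 0.2), Core(2, 0.5)]
print("next job runs on core", agree_on_next_core(cores))  # -> core 1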
Abstract:
Replication is a proven concept for increasing the availability of distributed systems. However, actively replicating every software component in distributed embedded systems may not be a feasible approach: not only are the available resources often limited, but the imposed overhead could also significantly degrade the system's performance. The paper proposes heuristics to dynamically determine which components to replicate based on their significance to the system as a whole, the consequent number of passive replicas, and where to place those replicas in the network. The results show that the proposed heuristics achieve reasonably higher system availability than static offline decisions when lower replication ratios are imposed due to resource or cost limitations. The paper also introduces a novel approach to coordinate the activation of passive replicas in interdependent distributed environments. The proposed distributed coordination model reduces the complexity of the needed interactions among nodes and converges faster to a globally acceptable solution than a traditional centralised approach.
Abstract:
Replication is a proven concept for increasing the availability of distributed systems. However, actively replicating every software component in distributed embedded systems may not be a feasible approach: not only are the available resources often limited, but the imposed overhead could also significantly degrade the system's performance. This paper proposes heuristics to dynamically determine which components to replicate based on their significance to the system as a whole, the consequent number of passive replicas, and where to place those replicas in the network. The activation of passive replicas is coordinated through a fast convergence protocol that reduces the complexity of the needed interactions among nodes until a new collective global service solution is determined.
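The sketch below conveys the flavour of such a heuristic in Python: components are ranked by significance, a limited replica budget is spent on the most significant ones first, and replicas land on the least-loaded nodes away from their primaries. The scoring threshold, the plan_replicas function and the sample data are invented for illustration and are not the papers' actual heuristics.

def plan_replicas(components, nodes, replica_budget):
    """components: list of (name, significance, primary_node)."""
    plan = {}
    # Most significant components are considered first.
    ranked = sorted(components, key=lambda c: c[1], reverse=True)
    node_load = {n: 0 for n in nodes}
    for name, significance, primary in ranked:
        if replica_budget == 0:
            break
        # More significant components receive more passive replicas.
        want = 2 if significance >= 0.8 else 1
        take = min(want, replica_budget)
        # Place replicas on the least-loaded nodes not hosting the primary.
        candidates = sorted((n for n in nodes if n != primary),
                            key=lambda n: node_load[n])
        plan[name] = candidates[:take]
        for n in plan[name]:
            node_load[n] += 1
        replica_budget -= take
    return plan

components = [("ctrl", 0.9, "A"), ("log", 0.3, "B"), ("io", 0.6, "C")]
print(plan_replicas(components, ["A", "B", "C", "D"], replica_budget=3))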
Abstract:
Value has been defined in different theoretical contexts as need, desire, interest, standards/criteria, beliefs, attitudes, and preferences. The creation of value is key to any business, and any business activity is about exchanging some tangible and/or intangible good or service and having its value accepted and rewarded by customers or clients, either inside the enterprise or collaborative network or outside. "Perhaps surprising then is that firms often do not know how to define value, or how to measure it" (Anderson and Narus, 1998, cited by [1]). Woodruff echoed that we need "richer customer value theory" to provide an "important tool for locking onto the critical things that managers need to know". In addition, he emphasized that "we need customer value theory that delves deeply into customer's world of product use in their situations" [2]. In this sense, we proposed and validated a novel "Conceptual Model for Decomposing the Value for the Customer". To this end, we were aware that time has a direct impact on customer perceived value, and that suppliers' and customers' perceptions change from the pre-purchase to the post-purchase phase, causing some uncertainty and doubts. We wanted to break down value into all its components, as well as all the assets built and used (from endogenous and/or exogenous perspectives). This component analysis was then transposed into a mathematical formulation using the Fuzzy Analytic Hierarchy Process (AHP), so that the uncertainty and vagueness of value perceptions could be embedded in a model that relates used and built assets, in the tangible and intangible deliverables exchanged among the involved parties, with their actual value perceptions.
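A minimal Python sketch of the fuzzy-AHP step: pairwise comparisons between value components are expressed as triangular fuzzy numbers (l, m, u) to capture vagueness, fuzzy weights are derived with the geometric-mean method (Buckley's approach, one common choice; the paper does not specify this variant), and centroid defuzzification yields crisp shares. The three value components and the comparison matrix are invented.

import math

# Pairwise comparisons as triangular fuzzy numbers (l, m, u): how much more
# one value component contributes than another, with vagueness kept explicit.
M = [
    [(1, 1, 1),       (2, 3, 4),     (4, 5, 6)],    # e.g. product quality
    [(1/4, 1/3, 1/2), (1, 1, 1),     (1, 2, 3)],    # e.g. service support
    [(1/6, 1/5, 1/4), (1/3, 1/2, 1), (1, 1, 1)],    # e.g. delivery time
]

def fuzzy_geometric_means(M):
    n = len(M)
    return [tuple(math.prod(M[i][j][k] for j in range(n)) ** (1 / n)
                  for k in range(3)) for i in range(n)]

g = fuzzy_geometric_means(M)
total = tuple(sum(gi[k] for gi in g) for k in range(3))
# Fuzzy weight g_i * (sum g)^-1: dividing (l,m,u) by (L,M,U) gives (l/U, m/M, u/L).
weights = [(gi[0] / total[2], gi[1] / total[1], gi[2] / total[0]) for gi in g]
# Centroid defuzzification turns each fuzzy weight back into a crisp share.
crisp = [sum(w) / 3 for w in weights]
s = sum(crisp)
print([round(c / s, 3) for c in crisp])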
Abstract:
In recent years, power systems have experienced many paradigm changes. The introduction of new players in the management of distributed generation leads to the decentralization of control and decision-making, so that each player is able to act in the market environment. In this new context, it is very relevant that aggregator players allow medium, small and micro players to act in a competitive environment. In order to achieve their objectives, virtual power players and single players are required to optimize their energy resource management process. To achieve this, it is essential to have financial resources capable of providing access to appropriate decision support tools. As small players have difficulty in accessing such tools, it is necessary that these players can benefit from alternative methodologies to support their decisions. This paper presents a methodology, based on Artificial Neural Networks (ANNs), intended to support smaller players. The methodology uses a training set created from energy resource scheduling solutions obtained with a mixed-integer linear programming (MIP) approach as the reference optimization methodology. The trained network is used to obtain locational marginal prices in a distribution network. The main goal of the paper is to verify the accuracy of the ANN-based approach. Moreover, the use of a single ANN is compared with the use of two or more ANNs to forecast the locational marginal price.
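A sketch of the surrogate idea in Python, assuming scikit-learn is available: an ANN is fitted to (scenario features, LMP) pairs and then predicts prices cheaply. The synthetic rows below stand in for labels that, in the paper's setting, would come from solving the reference MIP scheduling problem per scenario.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Features per scenario: total load (MW), available DG (MW), hour of day.
X = rng.uniform([10, 0, 0], [50, 30, 23], size=(500, 3))
# Placeholder target: in practice each LMP would come from the MIP
# energy resource scheduling solution for that scenario.
y = 30 + 0.8 * X[:, 0] - 0.5 * X[:, 1] + 2 * np.sin(X[:, 2] / 24 * 2 * np.pi)

ann = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
ann.fit(X[:400], y[:400])             # train on MIP-labelled scenarios
print("test MAE:", np.mean(np.abs(ann.predict(X[400:]) - y[400:])))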
Abstract:
Smart Grids (SGs) have emerged as the new paradigm for power system management and operation, being designed to integrate large amounts of distributed energy resources. This new paradigm requires more efficient Energy Resource Management (ERM) and, simultaneously, makes it a more complex problem, due to the intensive use of distributed energy resources (DER), such as distributed generation, active consumers with demand response contracts, and storage units. This paper presents a methodology to address energy resource scheduling, considering an intensive use of distributed generation and demand response contracts. A case study of a 30 kV real distribution network, including a substation with 6 feeders and 937 buses, is used to demonstrate the effectiveness of the proposed methodology. This network is managed by six virtual power players (VPPs) with the capability to manage the DER and the distribution network.
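To make the DG-plus-demand-response trade-off concrete, here is a toy single-period scheduling LP, assuming SciPy; the costs, capacities and the 100 MW load are invented, and the real methodology covers a full network, many periods and many resources.

from scipy.optimize import linprog

# Decision variables: [dg1, dg2, dr] = dispatched DG (MW) and demand
# curtailed under a demand response contract (MW).
cost = [40.0, 55.0, 70.0]                # EUR/MWh for each resource
load = 100.0
res = linprog(cost,
              A_eq=[[1.0, 1.0, 1.0]],    # dg1 + dg2 + dr must cover the load
              b_eq=[load],
              bounds=[(0, 60), (0, 50), (0, 20)],  # capacity/contract limits
              method="highs")
print("dispatch [dg1, dg2, dr]:", res.x, "cost:", res.fun)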
Abstract:
This paper presents a methodology that aims to increase the probability of delivering power to any load point of the electrical distribution system by identifying new investments in distribution components. The methodology is based on statistical failure and repair data of the distribution power system components and uses fuzzy-probabilistic modelling for system component outage parameters. Fuzzy membership functions of system component outage parameters are obtained from statistical records. A mixed-integer non-linear optimization technique is developed to identify adequate investments in distribution network components that increase the availability level for any customer in the distribution system at minimum cost for the system operator. To illustrate the application of the proposed methodology, the paper includes a case study that considers a real distribution network.
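A minimal sketch of deriving a fuzzy membership function for an outage parameter from statistical records, in Python; the failure-rate samples are illustrative, and a triangular shape built from the sample range and mean is one common modelling choice rather than the paper's stated construction.

def triangular_mf_from_records(samples):
    lo, hi = min(samples), max(samples)
    mode = sum(samples) / len(samples)   # central tendency as the peak
    def mu(x):
        if x <= lo or x >= hi:
            return 0.0
        if x <= mode:
            return (x - lo) / (mode - lo)
        return (hi - x) / (hi - mode)
    return (lo, mode, hi), mu

# Failure-rate records for one feeder section (occurrences per year).
records = [0.10, 0.14, 0.12, 0.18, 0.11, 0.15]
(lo, mode, hi), mu = triangular_mf_from_records(records)
print(f"lambda ~ triangular({lo}, {mode:.3f}, {hi}); mu(0.13) = {mu(0.13):.2f}")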
Abstract:
In competitive electricity markets with deep concerns for the efficiency level, demand response programs gain considerable significance. As demand response levels have decreased after the introduction of competition in the power industry, new approaches are required to take full advantage of demand response opportunities. Grid operators and utilities are taking new initiatives, recognizing the value of demand response for grid reliability and for the enhancement of organized spot markets' efficiency. This paper proposes a methodology for the selection of the consumers that participate in an event, which is the responsibility of the Portuguese transmission network operator. The proposed method is intended to be applied in the interruptibility service implemented in Portugal, in convergence with Spain, in the context of the Iberian electricity market. This method is based on the calculation of locational marginal prices (LMP), which are used to support the decision concerning the consumers to be scheduled for participation. The proposed method has been computationally implemented, and its application is illustrated in this paper using a 937-bus distribution network with more than 20,000 consumers.
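One plausible reading of an LMP-supported selection rule, sketched in Python: consumers are ranked by the LMP at their bus and interrupted, highest price first, until the requested reduction is met. The consumers, prices and greedy stopping rule are illustrative assumptions, not the operator's actual procedure.

def select_for_interruption(consumers, lmp_by_bus, reduction_target_mw):
    # Interrupting load where LMP is highest yields the largest benefit.
    ranked = sorted(consumers, key=lambda c: lmp_by_bus[c["bus"]], reverse=True)
    selected, achieved = [], 0.0
    for c in ranked:
        if achieved >= reduction_target_mw:
            break
        selected.append(c["name"])
        achieved += c["interruptible_mw"]
    return selected, achieved

lmp_by_bus = {1: 62.0, 2: 48.5, 3: 75.3}
consumers = [
    {"name": "plant_a", "bus": 1, "interruptible_mw": 4.0},
    {"name": "plant_b", "bus": 2, "interruptible_mw": 6.0},
    {"name": "plant_c", "bus": 3, "interruptible_mw": 3.0},
]
print(select_for_interruption(consumers, lmp_by_bus, reduction_target_mw=6.0))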
Abstract:
This paper presents an artificial neural network applied to the forecasting of electricity market prices, with the special feature of being dynamic. The dynamism occurs at two different levels. The first level is characterized by a re-training of the network in every iteration, so that the artificial neural network can consider the most recent data at all times and constantly adapt itself to the most recent events. The second level considers the adaptation of the neural network's execution time depending on the circumstances of its use. The execution time adaptation is performed through the automatic adjustment of the amount of data considered for training the network. This is an advantageous and indispensable feature for this neural network's integration in ALBidS (Adaptive Learning strategic Bidding System), a multi-agent system that has the purpose of providing decision support to the market negotiating players of MASCEM (Multi-Agent Simulator of Competitive Electricity Markets).
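The two levels of dynamism can be sketched in a few lines of Python, assuming scikit-learn: the model is refit on every iteration (level 1) over a sliding window whose length is shrunk or grown to meet a time budget (level 2). The random-walk price series, network size and 0.5 s budget are invented for illustration.

import time
import numpy as np
from sklearn.neural_network import MLPRegressor

prices = np.cumsum(np.random.default_rng(1).normal(0, 1, 2000)) + 50.0
window = 300                               # recent samples used for training

def make_xy(series):
    # Predict the next price from the previous 24 values.
    X = np.lib.stride_tricks.sliding_window_view(series[:-1], 24)
    return X, series[24:]

for step in range(5):                      # each market iteration
    X, y = make_xy(prices[-window:])
    t0 = time.perf_counter()
    model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=300, random_state=0)
    model.fit(X, y)                        # level 1: re-train every iteration
    elapsed = time.perf_counter() - t0
    # Level 2: adapt execution time by resizing the training window.
    window = max(100, int(window * 0.8)) if elapsed > 0.5 else min(1000, window + 50)
    print(f"step {step}: trained on {len(y)} samples in {elapsed:.2f}s")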
Abstract:
In the smart grid context, distributed generation units based on renewable resources play an important role. Photovoltaic solar units are a technology in evolution, and their prices have decreased significantly in recent years due to the high penetration of this technology in low voltage and medium voltage networks, supported by governmental policies and incentives. This paper proposes a methodology to determine the maximum penetration of photovoltaic units in a distribution network. The paper presents a case study, with four different scenarios, that considers a 32-bus medium voltage distribution network and the inclusion of storage units.
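The shape of such a search can be illustrated in Python: keep adding PV capacity and re-checking an operating limit until a violation occurs. A real study would run a full power flow per step over the 32-bus network; here a crude over-voltage proxy on one bus stands in for the network model, and all numbers are invented.

def max_pv_penetration(feeder_load_mw, step_mw=0.5, v_limit=1.05):
    pv = 0.0
    while True:
        pv += step_mw
        # Crude proxy: reverse power flow raises the voltage at the PV bus.
        reverse_flow = max(0.0, pv - feeder_load_mw)
        v_bus = 1.0 + 0.01 * pv + 0.03 * reverse_flow
        if v_bus > v_limit:
            return pv - step_mw        # last penetration that was feasible

print("max PV on a 4 MW feeder:", max_pv_penetration(4.0), "MW")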
Abstract:
Energy resource scheduling becomes increasingly important as the use of distributed resources is intensified and massive gridable vehicle use is envisaged. The present paper proposes a methodology for day-ahead energy resource scheduling for smart grids considering the intensive use of distributed generation and of gridable vehicles, usually referred to as Vehicle-to-Grid (V2G). This method considers that the energy resources are managed by a Virtual Power Player (VPP) which establishes contracts with V2G owners. It takes into account these contracts, the user requirements submitted to the VPP, and several discharge price steps. A full AC power flow calculation included in the model allows network constraints to be taken into account. The influence of the successive days' requirements on the day-ahead optimal solution is discussed and considered in the proposed model. A case study with a 33-bus distribution network and V2G is used to illustrate the good performance of the proposed method.
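A merit-order toy version of the dispatch idea in Python: DG offers and stepped V2G discharge prices are pooled and used cheapest-first to meet one hour's demand. The offers and the 10 MW demand are invented, and the actual model is an optimisation with a full AC power flow, not a greedy list.

offers = [
    ("dg_chp",    40.0, 5.0),   # (resource, EUR/MWh, available MW)
    ("v2g_step1", 55.0, 2.0),   # first, cheaper discharge price step
    ("v2g_step2", 80.0, 2.0),   # deeper discharge costs the EV owner more
    ("dg_diesel", 95.0, 6.0),
]
demand = 10.0
dispatch, remaining = [], demand
for name, price, cap in sorted(offers, key=lambda o: o[1]):
    if remaining <= 0:
        break
    used = min(cap, remaining)
    dispatch.append((name, used, price))
    remaining -= used
print(dispatch)   # -> dg_chp 5 MW, both V2G steps 4 MW, dg_diesel 1 MW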
Abstract:
The natural gas industry has been confronted with big challenges: strong growth in demand, investments in new GSUs (gas supply units), and efficient technical system management. Determining the right number of GSUs, their best locations in the network and the optimal allocation of loads to them is a decision problem that can be formulated as a combinatorial programming problem with the objective of minimizing system expenses. Our emphasis is on the formulation, interpretation and development of a solution algorithm that analyzes the trade-off between infrastructure investment expenditure and operating system costs. The location model was applied to a 12-node natural gas network, and its effectiveness was tested in five different operating scenarios.
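A brute-force Python sketch of the investment/operation trade-off on a toy network: open a subset of candidate sites, assign each load node to its cheapest open GSU, and keep the subset minimising investment plus operating cost. Distances and costs are invented, and a 12-node study would use a proper combinatorial solver rather than enumeration.

from itertools import combinations

sites = {"S1": 100.0, "S2": 120.0, "S3": 90.0}   # investment cost per GSU
loads = {"L1": 10, "L2": 15, "L3": 8}            # demand per load node
transport = {                                    # unit cost, site -> load
    ("S1", "L1"): 1.0, ("S1", "L2"): 3.0, ("S1", "L3"): 2.5,
    ("S2", "L1"): 2.0, ("S2", "L2"): 1.0, ("S2", "L3"): 2.0,
    ("S3", "L1"): 3.0, ("S3", "L2"): 2.0, ("S3", "L3"): 1.0,
}

best = None
for k in range(1, len(sites) + 1):
    for opened in combinations(sites, k):
        invest = sum(sites[s] for s in opened)
        # Each load is served by the cheapest GSU among those opened.
        operate = sum(min(transport[s, l] for s in opened) * d
                      for l, d in loads.items())
        total = invest + operate
        if best is None or total < best[0]:
            best = (total, opened)
print("cheapest plan:", best)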
Abstract:
This paper presents a methodology for distribution network reconfiguration in the presence of outages, in order to choose the reconfiguration that presents the lowest power losses. The methodology is based on statistical failure and repair data of the distribution power system components and uses fuzzy-probabilistic modelling for system component outage parameters. Fuzzy membership functions of system component outage parameters are obtained from statistical records. A hybrid method of fuzzy sets and Monte Carlo simulation based on the fuzzy-probabilistic models captures both the randomness and the fuzziness of component outage parameters. Once the system states are obtained by Monte Carlo simulation, a logical programming algorithm is applied to obtain all possible reconfigurations for every system state. In order to evaluate the line flows and bus voltages and to identify any overloading and/or voltage violations, a distribution power flow is applied to select the feasible reconfiguration with the lowest power losses. To illustrate the application of the proposed methodology to a practical case, the paper includes a case study that considers a real distribution network.
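A sketch of the hybrid fuzzy/Monte Carlo state sampling in Python: each component's outage parameter is a triangular fuzzy number; per trial, a crisp value is drawn from a random alpha-cut (capturing fuzziness) and the component's up/down state is then sampled from that value (capturing randomness). The parameters are illustrative, and the alpha-cut sampling scheme is one plausible realisation of the hybrid method, not necessarily the paper's exact procedure.

import random

def sample_from_triangular_alpha_cut(lo, mode, hi, rng):
    alpha = rng.random()                   # random membership level
    left = lo + alpha * (mode - lo)        # alpha-cut interval bounds
    right = hi - alpha * (hi - mode)
    return rng.uniform(left, right)        # crisp value inside the cut

components = {"line_1": (0.05, 0.10, 0.20), "line_2": (0.01, 0.02, 0.05)}
rng = random.Random(42)
down_counts = {c: 0 for c in components}
trials = 10_000
for _ in range(trials):
    for comp, (lo, mode, hi) in components.items():
        unavailability = sample_from_triangular_alpha_cut(lo, mode, hi, rng)
        if rng.random() < unavailability:  # component is down in this state
            down_counts[comp] += 1
# Each sampled system state would then feed the reconfiguration enumeration
# and power-flow screening described above.
print({c: n / trials for c, n in down_counts.items()})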