906 results for distributed network protocol (DNP3)
Abstract:
It has been years since the introduction of the Dynamic Network Optimization (DNO) concept, yet DNO development is still in its infancy, largely due to the lack of a breakthrough in reducing the lengthy optimization runtime. Our previous work, a distributed parallel solution, achieved a significant speed gain. To cater for the increased optimization complexity brought by the uptake of smartphones and tablets, however, this paper examines the potential areas for further improvement and presents a novel asynchronous distributed parallel design that minimizes inter-process communication. The new approach is implemented and applied to real-life projects, and the results demonstrate an acceleration of 7.5 times on a 16-core distributed system, compared to the 6.1 times of our previous solution, with no degradation in the optimization outcome. This is a solid step towards the realization of DNO.
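As a hedged illustration of the asynchronous pattern this abstract alludes to (not the paper's DNO optimizer), the Python sketch below collects per-partition results as they complete, so the coordinating process never blocks on a round-by-round synchronization barrier; the partition workload and the 5% cost figure are placeholders.

```python
# Illustrative sketch only: a toy objective stands in for the per-partition
# optimization work. The point is the asynchronous pattern: workers return
# results as they finish instead of the master blocking on a synchronous
# barrier every iteration.
from multiprocessing import Pool

def optimize_partition(args):
    """Hypothetical per-partition optimization task (placeholder workload)."""
    partition_id, demand = args
    # ... heavy local search would happen here ...
    return partition_id, demand * 0.95  # pretend 5% cost reduction

if __name__ == "__main__":
    partitions = [(i, 100.0 + i) for i in range(16)]
    with Pool(processes=4) as pool:
        # imap_unordered yields results as soon as each worker finishes,
        # so the master never waits for the slowest partition of a round.
        for pid, cost in pool.imap_unordered(optimize_partition, partitions):
            print(f"partition {pid}: optimized cost {cost:.1f}")
```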
Abstract:
We propose three research problems to explore the relations between trust and security in the setting of distributed computation. In the first problem, we study trust-based adversary detection in distributed consensus computation. The adversaries we consider behave arbitrarily, disobeying the consensus protocol. We propose a trust-based consensus algorithm with local and global trust evaluations. The algorithm can be abstracted as a two-layer structure, with the top layer running a trust-based consensus algorithm and the bottom layer acting as a subroutine that executes a global trust update scheme. We utilize a set of pre-trusted nodes, called headers, to propagate local trust opinions throughout the network. This two-layer framework is flexible in that it can easily be extended to incorporate more complicated decision rules and global trust schemes. The first problem assumes that normal nodes are homogeneous, i.e., a normal node is guaranteed to behave as it is programmed. In the second and third problems, however, we assume that nodes are heterogeneous, i.e., given a task, the probability that a node generates a correct answer varies from node to node. The adversaries considered in these two problems are workers from the open crowd who either invest little effort in the tasks assigned to them or intentionally give wrong answers. In the second part of the thesis, we consider a typical crowdsourcing task that aggregates input from multiple workers as a problem in information fusion. To cope with noisy and sometimes malicious input from workers, trust is used to model workers' expertise. In a multi-domain knowledge learning task, however, using scalar-valued trust to model a worker's performance is not sufficient to reflect the worker's trustworthiness in each of the domains. To address this issue, we propose a probabilistic model to jointly infer the multi-dimensional trust of workers, the multi-domain properties of questions, and the true labels of questions. Our model is flexible and extensible to incorporate metadata associated with questions. To show this, we further propose two extended models, one of which handles input tasks with real-valued features while the other handles tasks with text features by incorporating topic models. Our models can effectively recover the trust vectors of workers, which can be very useful for future task assignment adaptive to workers' trust. These results can be applied to the fusion of information from multiple data sources such as sensors, human input, machine learning results, or a hybrid of them. In the second subproblem, we address crowdsourcing with adversaries under logical constraints. We observe that questions are often not independent in real-life applications; instead, there are logical relations between them. Similarly, workers that provide answers are not independent of each other either, and answers given by workers with similar attributes tend to be correlated. Therefore, we propose a novel unified graphical model consisting of two layers: the top layer encodes domain knowledge, allowing users to express logical relations using first-order logic rules, and the bottom layer encodes a traditional crowdsourcing graphical model. Our model can be seen as a generalized probabilistic soft logic framework that encodes both logical relations and probabilistic dependencies. To solve the collective inference problem efficiently, we have devised a scalable joint inference algorithm based on the alternating direction method of multipliers.
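A minimal sketch of the two-layer idea described for the first problem, under assumed dynamics rather than the thesis' actual algorithm: the bottom layer derives a global trust score from each node's deviation from the current estimate, and the top layer computes a trust-weighted consensus that discounts the arbitrarily behaving node.

```python
# Minimal sketch, not the thesis algorithm: bottom layer updates a global trust
# score per node from how far its reported value deviates from the current
# estimate; top layer runs a trust-weighted consensus average that discounts
# low-trust (potentially adversarial) nodes.
import numpy as np

rng = np.random.default_rng(0)
n = 10
values = rng.normal(5.0, 1.0, n)
values[0] = 50.0                      # node 0 behaves arbitrarily (adversary)
trust = np.ones(n)                    # global trust, updated by the bottom layer

for step in range(20):
    estimate = np.average(values, weights=trust)     # top layer: consensus
    deviation = np.abs(values - estimate)
    trust = 1.0 / (1.0 + deviation)                  # bottom layer: trust update
    honest = np.arange(1, n)
    # honest nodes move toward the trust-weighted estimate; the adversary does not
    values[honest] += 0.5 * (estimate - values[honest])

print(f"final estimate {estimate:.2f}, adversary trust {trust[0]:.3f}")
```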
The third part of the thesis considers the problem of optimal assignment under budget constraints when workers are unreliable and sometimes malicious. In a real crowdsourcing market, each answer obtained from a worker incurs a cost. The cost is associated with both the level of trustworthiness of workers and the difficulty of tasks. Typically, access to expert-level (more trustworthy) workers is more expensive than access to the average crowd, and completing a challenging task is more costly than answering a click-away question. Here, we address the problem of optimally assigning heterogeneous tasks to workers of varying trust levels under budget constraints. Specifically, we design a trust-aware task allocation algorithm that takes as input the estimated trust of workers and a pre-set budget, and outputs the optimal assignment of tasks to workers. We derive a bound on the total error probability that naturally relates to the budget, the trustworthiness of the crowd, and the costs of obtaining labels from the crowd: a higher budget, a more trustworthy crowd, and less costly jobs result in a lower theoretical bound. Our allocation scheme does not depend on the specific design of the trust evaluation component, so it can be combined with generic trust evaluation algorithms.
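For illustration only, the sketch below implements a simple greedy "trust per unit cost" allocation under a budget; the cost model, the greedy rule, and all numbers are assumptions and do not reproduce the thesis' allocation algorithm or its error bound.

```python
# Hedged greedy heuristic in the spirit of budget-constrained allocation:
# spend the budget on the answers with the best trust per unit cost, with
# harder tasks and more trustworthy workers costing more.
def allocate(tasks, workers, budget):
    """tasks: {task_id: difficulty_cost}; workers: {worker_id: (trust, fee)}."""
    candidates = []
    for t, difficulty in tasks.items():
        for w, (trust, fee) in workers.items():
            cost = difficulty * fee
            candidates.append((trust / cost, cost, t, w))
    assignment, spent = [], 0.0
    for _, cost, t, w in sorted(candidates, reverse=True):
        if spent + cost <= budget:
            assignment.append((t, w, cost))
            spent += cost
    return assignment, spent

tasks = {"easy_q": 1.0, "hard_q": 3.0}
workers = {"expert": (0.95, 2.0), "crowd": (0.7, 0.5)}
print(allocate(tasks, workers, budget=5.0))
```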
Distributed and compressed MIKEY mode to secure end-to-end communications in the Internet of things.
Abstract:
Multimedia Internet KEYing (MIKEY) is a protocol that aims at establishing secure credentials between two communicating entities. However, existing MIKEY modes fail to meet the requirements of low-power and low-processing devices. To address this issue, we combine two previously proposed approaches to introduce a new distributed and compressed MIKEY mode for the Internet of Things. Relying on a cooperative approach, a set of third parties is used to offload heavy computational operations from the constrained nodes. In doing so, the pre-shared key mode is used in the constrained part of the network, while the public key mode is used in the unconstrained part. Furthermore, to mitigate the communication cost, we introduce a new header compression scheme that reduces the size of MIKEY's header from 12 bytes to 3 bytes in the best compression case. Preliminary results show that our proposed mode is energy preserving while its security properties remain intact.
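To make the compression idea concrete, here is a hedged bit-packing sketch in Python; the chosen fields and bit widths are assumptions for illustration and are not the scheme specified in the paper or in RFC 3830, but they show how a few dynamic fields can be packed into a 3-byte header once static or context-derivable fields are elided.

```python
# Illustrative bit-packing only; field names and widths (3+5+4+12 = 24 bits)
# are invented for this sketch, not taken from the paper's compression scheme.
def compress_header(version: int, data_type: int, prf: int, csb_id_lsb: int) -> bytes:
    """Pack four small fields into a compact 3-byte header."""
    word = (version & 0x7) << 21 | (data_type & 0x1F) << 16 \
         | (prf & 0xF) << 12 | (csb_id_lsb & 0xFFF)
    return word.to_bytes(3, "big")

def decompress_header(blob: bytes):
    word = int.from_bytes(blob, "big")
    return (word >> 21) & 0x7, (word >> 16) & 0x1F, (word >> 12) & 0xF, word & 0xFFF

packed = compress_header(version=1, data_type=4, prf=0, csb_id_lsb=0xABC)
assert decompress_header(packed) == (1, 4, 0, 0xABC)
print(len(packed), "bytes:", packed.hex())
```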
Abstract:
The explosion in mobile data traffic is a driver for future network operator technologies, given its large potential to affect both network performance and generated revenue. The concept of distributed mobility management (DMM) has emerged to overcome the efficiency limitations of centralized mobility approaches, proposing not only the distribution of anchoring functions but also dynamic mobility activation that is sensitive to the needs of applications. Nevertheless, there is no widely accepted solution for IP multicast in DMM environments, as the first proposals, based on MLD Proxy, are prone to the tunnel replication problem or to service disruption. We propose the application of PIM-SM in mobility entities as an alternative solution for multicast support in DMM, and introduce an architecture enabling support for mobile multicast listeners over distributed anchoring frameworks in a network-efficient way. The architecture aims at providing operators with flexible options for multicast mobility, supporting three modes: the first introduces basic IP multicast support in DMM; the second improves subscription time through extensions to the mobility protocol, eliminating the dependence on the MLD protocol; and the third enables fast listener mobility by using mobility tunnels to avoid the potentially slow multicast tree convergence in larger infrastructures. The different modes were evaluated by mathematical analysis of disruption time and packet loss during handoff against several parameters, of total and tunneling packet delivery cost, and of packet and signaling overhead.
Abstract:
Long-term monitoring of ambient mercury (Hg) on a global scale, to assess its emission, transport, atmospheric chemistry, and deposition processes, is vital to understanding the impact of Hg pollution on the environment. The Global Mercury Observation System (GMOS) project was funded by the European Commission (http://www.gmos.eu) and started in November 2010 with the overall goal of developing a coordinated global observing system to monitor Hg on a global scale, including a large network of ground-based monitoring stations, ad hoc periodic oceanographic cruises, and measurement flights in the lower and upper troposphere as well as in the lower stratosphere. To date, more than 40 ground-based monitoring sites constitute the global network, covering many regions where little to no observational data were available before GMOS. This work presents atmospheric Hg concentrations recorded worldwide in the framework of the GMOS project (2010–2015), analyzing Hg measurement results in terms of temporal trends, seasonality, and comparability within the network. Major findings highlighted in this paper include a clear gradient of Hg concentrations between the Northern and Southern Hemispheres, confirming that the gradient observed is mostly driven by local and regional sources, which can be anthropogenic, natural, or a combination of both.
Abstract:
Background: Various neuroimaging studies, both structural and functional, have provided support for the proposal that a distributed brain network is likely to be the neural basis of intelligence. The theory of Distributed Intelligent Processing Systems (DIPS), first developed in the field of Artificial Intelligence, was proposed to adequately model distributed neural intelligent processing. In addition, the neural efficiency hypothesis suggests that individuals with higher intelligence display more focused cortical activation during cognitive performance, resulting in lower total brain activation compared with individuals who have lower intelligence. This may be understood as a property of a DIPS. Methodology and Principal Findings: In our study, a new EEG brain mapping technique, based on the neural efficiency hypothesis and the notion of the brain as a DIPS, was used to investigate the correlations between IQ, evaluated with the WAIS (Wechsler Adult Intelligence Scale) and the WISC (Wechsler Intelligence Scale for Children), and the brain activity associated with visual and verbal processing, in order to test the validity of a distributed neural basis for intelligence. Conclusion: The present results support these claims and the neural efficiency hypothesis.
Abstract:
Wireless Sensor Networks (WSNs) have a vast field of applications, including deployment in hostile environments. Thus, the adoption of security mechanisms is fundamental. However, the extremely constrained nature of sensors and the potentially dynamic behavior of WSNs hinder the use of key management mechanisms commonly applied in modern networks. For this reason, many lightweight key management solutions have been proposed to overcome these constraints. In this paper, we review the state of the art of these solutions and evaluate them based on metrics adequate for WSNs. We focus on pre-distribution schemes well adapted to homogeneous networks (since this is the more general network organization), identifying generic features that can improve some of these metrics. We also discuss some challenges in the area and future research directions.
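As one concrete example of the pre-distribution family such surveys cover, the sketch below implements the classic random key pre-distribution idea (each sensor is loaded with a random key ring drawn from a large pool, and neighbours can secure a link only if their rings intersect); the pool, ring, and network sizes are toy values.

```python
# Sketch of random key pre-distribution (Eschenauer-Gligor style) with toy sizes.
import random

POOL_SIZE, RING_SIZE, NUM_NODES = 1000, 50, 20
pool = list(range(POOL_SIZE))
rings = {n: set(random.sample(pool, RING_SIZE)) for n in range(NUM_NODES)}

def shared_key(a: int, b: int):
    """Two neighbouring sensors can secure a link iff their key rings intersect."""
    common = rings[a] & rings[b]
    return min(common) if common else None   # pick any common key identifier

linked = sum(1 for a in range(NUM_NODES) for b in range(a + 1, NUM_NODES)
             if shared_key(a, b) is not None)
total = NUM_NODES * (NUM_NODES - 1) // 2
print(f"{linked}/{total} node pairs share at least one pre-distributed key")
```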
Abstract:
Second-order phase-locked loops (PLLs) are devices that are able to provide synchronization between the nodes of a network even under severe quality restrictions on signal propagation. Consequently, they are widely used in telecommunication and control. Conventional master-slave (M-S) clock-distribution systems are being replaced by mutually connected (MC) ones due to their good potential for use in new types of application such as wireless sensor networks, distributed computation, and communication systems. Here, by analytical reasoning, a nonlinear algebraic system of equations is proposed to establish the existence conditions for the synchronous state in an MC PLL network. Numerical experiments confirm the analytical results and provide ideas about how the network parameters affect the reachability of the synchronous state. The phase-difference oscillation amplitudes are related to the node parameters, helping to design PLL neural networks. Furthermore, estimating the acquisition time as a function of the node parameters allows the performance evaluation of time-distribution systems and neural networks based on phase-locking techniques.
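The sketch below is a numerical toy, not the paper's system of equations: it integrates an all-to-all network of generic second-order phase models and checks whether the phase spread collapses, which is one way to probe the reachability of the synchronous state; the damping, gain, and topology are assumptions.

```python
# Numerical sketch of a mutually connected network of second-order phase models;
# damping, gain, and the all-to-all topology are assumed values for illustration.
import numpy as np

n, K, mu = 4, 2.0, 1.0          # nodes, coupling gain, damping
rng = np.random.default_rng(1)
phase = rng.uniform(-1.0, 1.0, n)
freq = np.zeros(n)              # phase derivative (frequency deviation)
dt = 0.01

for _ in range(5000):
    # each node is driven by the mean phase-detector output of its neighbours
    drive = np.array([np.mean(np.sin(phase - phase[i])) for i in range(n)])
    freq += dt * (-mu * freq + K * drive)      # second-order node dynamics
    phase += dt * freq

spread = np.max(phase) - np.min(phase)
print(f"final phase spread: {spread:.4f} rad (near zero => synchronous state reached)")
```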
Abstract:
Objectives. To evaluate the influence of different tertiary amines on the degree of conversion (DC), shrinkage-strain, shrinkage-strain rate, Knoop microhardness, and the color and transmittance stabilities of experimental resins containing BisGMA/TEGDMA (3:1 wt), 0.25 wt% camphorquinone, and 1 wt% amine (DMAEMA, CEMA, DMPT, DEPT, or DABE). Different light-curing protocols were also evaluated. Methods. DC was evaluated with FTIR-ATR and shrinkage-strain with the bonded-disk method. Shrinkage-strain-rate data were obtained by numerical differentiation of the shrinkage-strain data with respect to time. Color stability and transmittance were evaluated after different periods of artificial aging, according to ISO 7491:2000. Results were evaluated with ANOVA, Tukey, and Dunnett's T3 tests (α = 0.05). Results. The studied properties were influenced by the amines. DC and shrinkage-strain increased in the sequence: CQ < DEPT < DMPT ≤ CEMA ≈ DABE < DMAEMA. Both DC and shrinkage were also influenced by the curing protocol, with positive correlations between DC and shrinkage-strain and between DC and shrinkage-strain rate. Materials generally decreased in L* and increased in b*; the strong exception was the resin containing DMAEMA, which did not show dark and yellow shifts. Color varied in the sequence: DMAEMA < DEPT < DMPT < CEMA < DABE. Transmittance varied in the sequence: DEPT ≈ DABE < DABE ≈ DMPT ≈ CEMA < DMPT ≈ CEMA ≈ DMAEMA, being more evident at the wavelength of 400 nm. No correlations between DC and optical properties were observed. Significance. The resin containing DMAEMA showed higher DC, shrinkage-strain, shrinkage-strain rate, and microhardness, in addition to better optical properties.
Abstract:
In recent years, power systems have experienced many changes in their paradigm. The introduction of new players in the management of distributed generation leads to the decentralization of control and decision-making, so that each player is able to act in the market environment. In this new context, it is very relevant that aggregator players allow midsize, small, and micro players to act in a competitive environment. In order to achieve their objectives, virtual power players and single players are required to optimize their energy resource management process, and to achieve this it is essential to have financial resources capable of providing access to appropriate decision support tools. As small players have difficulty accessing such tools, it is necessary that they can benefit from alternative methodologies to support their decisions. This paper presents a methodology based on Artificial Neural Networks (ANNs) intended to support smaller players. The methodology uses a training set created from energy resource scheduling solutions obtained with a mixed-integer linear programming (MIP) approach as the reference optimization methodology. The trained network is used to obtain locational marginal prices in a distribution network. The main goal of the paper is to verify the accuracy of the ANN-based approach. Moreover, the use of a single ANN is compared with the use of two or more ANNs to forecast the locational marginal price.
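A hedged sketch of the surrogate idea: train a neural network on examples whose targets would, in the paper, come from MIP-based scheduling runs. Synthetic scenarios stand in for those solutions here, and scikit-learn's MLPRegressor stands in for whatever architecture the authors used; the feature names and the price relation are invented.

```python
# Train an ANN surrogate on placeholder data standing in for MIP scheduling results.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
# features per scenario: [total load (MW), wind availability (MW), hour of day]
X = np.column_stack([rng.uniform(20, 60, 500),
                     rng.uniform(0, 15, 500),
                     rng.integers(0, 24, 500)])
# placeholder locational marginal price the MIP would have produced
y = 30 + 0.8 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(0, 2, 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
ann = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
ann.fit(X_tr, y_tr)
print(f"R^2 on held-out scenarios: {ann.score(X_te, y_te):.3f}")
```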
Abstract:
Smart Grids (SGs) have appeared as the new paradigm for power system management and operation, being designed to integrate large amounts of distributed energy resources. This new paradigm requires more efficient Energy Resource Management (ERM) and, at the same time, makes it a more complex problem, due to the intensive use of distributed energy resources (DER) such as distributed generation, active consumers with demand response contracts, and storage units. This paper presents a methodology to address energy resource scheduling, considering an intensive use of distributed generation and demand response contracts. A case study of a 30 kV real distribution network, including a substation with 6 feeders and 937 buses, is used to demonstrate the effectiveness of the proposed methodology. This network is managed by six virtual power players (VPPs) with the capability to manage the DER and the distribution network.
Abstract:
In competitive electricity markets with deep concerns about efficiency, demand response programs gain considerable significance. In the same way, distributed generation has gained increasing importance in the operation and planning of power systems. Grid operators and utilities are taking new initiatives, recognizing the value of demand response and of distributed generation for grid reliability and for enhancing the efficiency of organized spot markets. Grid operators and utilities thus become able to act in both the energy and reserve components of electricity markets. This paper proposes a methodology for a joint dispatch of demand response and distributed generation to provide energy and reserve by a virtual power player that operates a distribution network. The proposed method has been computationally implemented and its application is illustrated using a 32-bus distribution network with 32 medium voltage consumers.
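As an illustration of what a joint energy-and-reserve dispatch looks like in its simplest form, the sketch below solves a tiny linear program with scipy; the resources, costs, limits, and the absence of network constraints are all assumptions, not the paper's model.

```python
# Minimal joint energy-and-reserve dispatch as a toy linear program.
from scipy.optimize import linprog

# decision variables: [dg_energy, dr_energy, dg_reserve, dr_reserve] in MW
c = [30.0, 45.0, 5.0, 8.0]                     # assumed prices per MW
A_eq = [[1, 1, 0, 0]]                          # energy balance
b_eq = [12.0]                                  # 12 MW of demand to supply
A_ub = [[0, 0, -1, -1],                        # reserve requirement: dg_r + dr_r >= 3
        [1, 0, 1, 0],                          # DG capacity shared by energy + reserve
        [0, 1, 0, 1]]                          # DR capacity shared by energy + reserve
b_ub = [-3.0, 10.0, 6.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 4)
print(dict(zip(["dg_e", "dr_e", "dg_r", "dr_r"], res.x.round(2))), res.fun)
```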
Abstract:
In the smart grid context, distributed generation units based on renewable resources play an important role. Photovoltaic solar units are an evolving technology whose prices have decreased significantly in recent years, due to the high penetration of this technology in low voltage and medium voltage networks supported by governmental policies and incentives. This paper proposes a methodology to determine the maximum penetration of photovoltaic units in a distribution network. The paper presents a case study, with four different scenarios, that considers a 32-bus medium voltage distribution network and the inclusion of storage units.
Abstract:
Energy resource scheduling becomes increasingly important as the use of distributed resources is intensified and the massive use of gridable vehicles is envisaged. This paper proposes a methodology for day-ahead energy resource scheduling in smart grids, considering the intensive use of distributed generation and of gridable vehicles, usually referred to as Vehicle-to-Grid (V2G). The method considers that the energy resources are managed by a Virtual Power Player (VPP) which establishes contracts with V2G owners. It takes into account these contracts, the users' requirements submitted to the VPP, and several discharge price steps. A full AC power flow calculation included in the model allows network constraints to be taken into account. The influence of the requirements of successive days on the day-ahead optimal solution is discussed and considered in the proposed model. A case study with a 33-bus distribution network and V2G is used to illustrate the good performance of the proposed method.
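A small sketch of the stepwise discharge pricing mentioned in the abstract: each additional energy block discharged by a vehicle is paid at a different price step. The block sizes and prices here are hypothetical.

```python
# Stepwise V2G discharge pricing: successive energy blocks are paid at
# increasing price steps (block limits and prices are assumptions).
def discharge_cost(energy_kwh: float,
                   steps=((5.0, 0.10), (5.0, 0.15), (float("inf"), 0.22))):
    """steps: sequence of (block size in kWh, price per kWh)."""
    cost, remaining = 0.0, energy_kwh
    for block, price in steps:
        taken = min(remaining, block)
        cost += taken * price
        remaining -= taken
        if remaining <= 0:
            break
    return cost

print(discharge_cost(8.0))   # 5 kWh at 0.10 + 3 kWh at 0.15 = 0.95
```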
Abstract:
The large increase of distributed energy resources, including distributed generation, storage systems, and demand response, especially in distribution networks, makes the management of the available resources a more complex and crucial process. With wind-based generation gaining relevance in the generation mix, the fact that wind forecasting accuracy drops rapidly as the forecast horizon increases requires short-term and very short-term re-scheduling, so that the final implemented solution achieves the lowest possible operation costs. This paper proposes a methodology for energy resource scheduling in smart grids, considering day-ahead, hour-ahead, and five-minutes-ahead scheduling. The short-term scheduling, undertaken five minutes ahead, takes advantage of the high accuracy of very short-term wind forecasting, providing the user with more efficient scheduling solutions. The proposed method uses a Genetic Algorithm based approach for optimization that is able to cope with the hard execution time constraint of short-term scheduling. Realistic power system simulation, based on PSCAD, is used to validate the obtained solutions. The paper includes a case study with a 33-bus distribution network with high penetration of distributed energy resources, implemented in PSCAD.
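A toy sketch of a GA-based re-scheduling step, illustrating why a genetic algorithm suits the tight runtime budget of five-minutes-ahead scheduling (candidate schedules are plain vectors evolved for a fixed number of generations); the objective, data, and operators are assumptions and do not reproduce the paper's formulation or its PSCAD validation.

```python
# Toy GA re-scheduling step: evolve dispatch vectors against a mismatch penalty.
import random

HORIZON, POP, GENS = 12, 30, 40          # 5-minute steps, population, generations
demand = [10 + 3 * random.random() for _ in range(HORIZON)]
wind_forecast = [4 + 2 * random.random() for _ in range(HORIZON)]

def cost(schedule):
    """Penalize mismatch between dispatched generation plus wind and demand."""
    return sum(abs(g + w - d) for g, w, d in zip(schedule, wind_forecast, demand))

population = [[random.uniform(0, 10) for _ in range(HORIZON)] for _ in range(POP)]
for _ in range(GENS):
    population.sort(key=cost)
    parents = population[:POP // 2]                       # elitist selection
    children = []
    for _ in range(POP - len(parents)):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, HORIZON)
        child = a[:cut] + b[cut:]                         # one-point crossover
        i = random.randrange(HORIZON)
        child[i] = max(0.0, child[i] + random.gauss(0, 0.5))  # mutation
        children.append(child)
    population = parents + children

best = min(population, key=cost)
print(f"best mismatch after {GENS} generations: {cost(best):.2f} MW")
```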