4 results for load curve
in DigitalCommons@University of Nebraska - Lincoln
Abstract:
The security of the two-party Diffie-Hellman key exchange protocol is currently based on the discrete logarithm problem (DLP). However, it can also be built upon the elliptic curve discrete logarithm problem (ECDLP). Most proposed secure group communication schemes employ the DLP-based Diffie-Hellman protocol. This paper proposes ECDLP-based Diffie-Hellman protocols for secure group communication and evaluates their performance on wireless ad hoc networks. The proposed schemes are compared, at the same security level, with DLP-based group protocols under different channel conditions. Our experiments and analysis show that the Tree-based Group Elliptic Curve Diffie-Hellman (TGECDH) protocol offers the best overall performance for secure group communication among the four schemes discussed in the paper. Low communication overhead, relatively low computation load, and short packets are the main reasons for the TGECDH protocol's good performance.
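The ECDLP building block behind these protocols can be illustrated with a minimal two-party elliptic curve Diffie-Hellman exchange. The sketch below uses a deliberately tiny curve and hand-picked scalars purely for illustration; it is not the tree-based group (TGECDH) construction from the paper, and real deployments would use a standardized curve and a vetted cryptographic library.

# Minimal two-party ECDH sketch on a tiny illustrative curve
# y^2 = x^3 + 2x + 3 over GF(97). Toy parameters only: real systems use
# standardized curves and vetted libraries. Shows the ECDLP building block,
# not the tree-based group (TGECDH) construction evaluated in the paper.

P_MOD, A, B = 97, 2, 3      # curve parameters
G = (3, 6)                  # base point: 6^2 = 36 = 3^3 + 2*3 + 3 (mod 97)
O = None                    # point at infinity

def point_add(p, q):
    """Add two points on the toy curve."""
    if p is O:
        return q
    if q is O:
        return p
    (x1, y1), (x2, y2) = p, q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return O                                  # p + (-p) = O
    if p == q:
        s = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (s * s - x1 - x2) % P_MOD
    return (x3, (s * (x1 - x3) - y1) % P_MOD)

def scalar_mult(k, point):
    """Double-and-add computation of k * point."""
    result, addend = O, point
    while k:
        if k & 1:
            result = point_add(result, addend)
        addend = point_add(addend, addend)
        k >>= 1
    return result

# Each party keeps a private scalar and publishes priv * G.
alice_priv, bob_priv = 59, 67
alice_pub = scalar_mult(alice_priv, G)
bob_pub = scalar_mult(bob_priv, G)

# Both sides derive the same shared point without revealing their scalars;
# recovering a scalar from a public point is an instance of the ECDLP.
assert scalar_mult(alice_priv, bob_pub) == scalar_mult(bob_priv, alice_pub)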
Abstract:
The next-generation SONET metro network is evolving into a service-rich infrastructure. At the edge of such a network, multi-service provisioning platforms (MSPPs) provide efficient data mapping enabled by the Generic Framing Procedure (GFP) and Virtual Concatenation (VC). The core of the network tends to be a meshed architecture equipped with Multi-Service Switches (MSSs). In the context of these emerging technologies, we propose a load-balancing spare capacity reallocation approach to improve network utilization in next-generation SONET metro networks. Using our approach, carriers can postpone network upgrades, resulting in increased revenue with reduced capital expenditures (CAPEX). For the first time, we consider the spare capacity reallocation problem from a capacity upgrade and network planning perspective. Our approach can operate in the context of shared-path protection (with backup multiplexing) because it reallocates spare capacity without disrupting working services. Unlike previous spare capacity reallocation approaches, which aim at minimizing total spare capacity, our load-balancing approach minimizes the network load vector (NLV), a novel metric that reflects the network load distribution. Because NLV takes into consideration both uniform and non-uniform link capacity distributions, our approach can benefit both uniform and non-uniform networks. We develop a greedy load-balancing spare capacity reallocation (GLB-SCR) heuristic algorithm to implement this approach. Our experimental results show that GLB-SCR outperforms a previously proposed algorithm (SSR) in terms of established connection capacity and total network capacity in both uniform and non-uniform networks.
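As an illustration of the load-balancing idea, the sketch below assumes a simplified reading of the objective: the load vector is taken to be the per-link utilization ratios sorted in descending order and compared lexicographically, and each backup path is assumed to consume one unit of spare capacity per link. The names, toy topology, and this stand-in for the NLV are illustrative assumptions, not the paper's formulation.

# Hedged sketch of a greedy load-balancing reallocation step.
# Assumption (illustrative only): the load vector is the list of per-link
# ratios spare_used / capacity, sorted in descending order and compared
# lexicographically, and each backup path uses one capacity unit per link.

def load_vector(spare_used, capacity):
    """Descending per-link utilization ratios (assumed stand-in for NLV)."""
    return sorted((spare_used[l] / capacity[l] for l in capacity), reverse=True)

def reroute_if_better(backup_path, candidates, spare_used, capacity):
    """Move one backup path to the candidate route that most improves the
    load vector; working paths are never touched, so service is undisrupted."""
    best_path = backup_path
    best_vec = load_vector(spare_used, capacity)
    best_state = spare_used
    for cand in candidates:
        trial = dict(spare_used)
        for link in backup_path:
            trial[link] -= 1          # release spare capacity on the old route
        for link in cand:
            trial[link] += 1          # reserve spare capacity on the new route
        vec = load_vector(trial, capacity)
        if vec < best_vec:            # lexicographic comparison
            best_path, best_vec, best_state = cand, vec, trial
    return best_path, best_state

# Toy four-link example: shifting the backup off the most-loaded link
# flattens the load distribution even though total spare usage is unchanged.
capacity   = {"a": 10, "b": 10, "c": 4, "d": 10}
spare_used = {"a": 6, "b": 2, "c": 3, "d": 1}
new_route, spare_used = reroute_if_better(["a", "c"], [["b", "d"]], spare_used, capacity)
print(new_route)      # ['b', 'd']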
Abstract:
Wavelength-routed networks (WRNs) are very promising candidates for next-generation Internet and telecommunication backbones. In such a network, optical-layer protection is of paramount importance due to the risk of losing large amounts of data under a failure. To protect the network against this risk, service providers usually provide a pair of risk-independent working and protection paths for each optical connection. However, the investment made for optical-layer protection increases network cost. To reduce the capital expenditure, service providers need to efficiently utilize their network resources. Among the existing approaches, shared-path protection has proven to be practical and cost-efficient [1]. In shared-path protection, several protection paths can share a wavelength on a fiber link if their working paths are risk-independent. In real-world networks, provisioning is usually implemented without knowledge of future network resource utilization. As the network changes with the addition and deletion of connections, network utilization becomes sub-optimal. Reconfiguration, i.e., re-provisioning the existing connections, is an attractive way to close the gap between the current network utilization and its optimal value [2]. In this paper, we propose a new shared-protection-path reconfiguration approach. Unlike some previous reconfiguration approaches, which alter the working paths, our approach only changes protection paths; it therefore does not interfere with the ongoing services on the working paths and is risk-free. Previous studies have verified the benefits arising from the reconfiguration of existing connections [2] [3] [4]. Most of them aim at minimizing the total used wavelength-links or ports. However, this objective does not directly translate into cost savings, because minimizing total network resource consumption does not necessarily maximize the capability of accommodating future connections. As a result, service providers may still need to pay for early network upgrades. In contrast, our proposed shared-protection-path reconfiguration approach is based on a load-balancing objective, which minimizes the network load distribution vector (LDV, see Section 2). This new objective is designed to postpone network upgrades, thus bringing extra cost savings to service providers. In other words, by using the new objective, service providers can establish as many connections as possible before network upgrades, resulting in increased revenue. We develop a heuristic load-balancing (LB) reconfiguration approach based on this new objective and compare its performance with an approach previously introduced in [2] and [4], whose objective is minimizing the total network resource consumption.
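The sharing condition described above can be made concrete with a small check: two protection paths may share a wavelength on a common fiber link only when their working paths cannot fail together. The sketch below treats each fiber link as its own risk, which is a simplifying assumption; real networks often group links into shared-risk link groups, and the data layout here is purely illustrative.

# Sketch of the backup-sharing rule for shared-path protection.
# Simplifying assumption: every fiber link is its own risk (no SRLGs).

def risk_independent(working_a, working_b):
    """True when the two working paths share no fiber link, i.e. a single
    link failure cannot bring both of them down at once."""
    return not (set(working_a) & set(working_b))

def can_share_wavelength(conn_a, conn_b):
    """Protection paths of conn_a and conn_b may reuse a wavelength on the
    links they have in common only if their working paths are risk-independent."""
    return risk_independent(conn_a["working"], conn_b["working"])

conn_1 = {"working": ["A-B", "B-C"], "protection": ["A-D", "D-C"]}
conn_2 = {"working": ["E-F", "F-C"], "protection": ["E-D", "D-C"]}
print(can_share_wavelength(conn_1, conn_2))   # True: disjoint working paths,
                                              # so both backups may share a
                                              # wavelength on link D-C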
Abstract:
In active learning, a machine learning algorithm is given an unlabeled set of examples U and is allowed to request labels for a relatively small subset of U to use for training. The goal is to judiciously choose which examples in U to have labeled in order to optimize some performance criterion, e.g., classification accuracy. We study how active learning affects AUC (the area under the ROC curve). We examine two existing algorithms from the literature and present our own active learning algorithms designed to maximize the AUC of the hypothesis. One of our algorithms was consistently the top performer, and Closest Sampling from the literature often came in second behind it. When good posterior probability estimates were available, our heuristics were by far the best.
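For context, the listing below sketches a generic closest-sampling loop of the kind referenced above: the learner repeatedly requests a label for the unlabeled example whose estimated posterior is nearest the decision boundary, then reports test-set AUC. The dataset, model, and labeling budget are illustrative assumptions; this is not the paper's own AUC-maximizing heuristic.

# Generic "closest sampling" active-learning loop (illustrative, not the
# authors' algorithm): label the unlabeled example whose posterior estimate
# is closest to 0.5, retrain, and measure AUC on a held-out test set.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_pool, X_test, y_pool, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

# Seed the labeled set with a few examples of each class.
labeled = list(np.where(y_pool == 0)[0][:5]) + list(np.where(y_pool == 1)[0][:5])
unlabeled = [i for i in range(len(X_pool)) if i not in labeled]

model = LogisticRegression(max_iter=1000)
for _ in range(40):                                         # label 40 more examples
    model.fit(X_pool[labeled], y_pool[labeled])
    proba = model.predict_proba(X_pool[unlabeled])[:, 1]
    pick = unlabeled[int(np.argmin(np.abs(proba - 0.5)))]   # closest to the boundary
    labeled.append(pick)
    unlabeled.remove(pick)

model.fit(X_pool[labeled], y_pool[labeled])
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"test AUC after closest sampling: {auc:.3f}")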