4 results for fault-tolerant
in DigitalCommons@University of Nebraska - Lincoln
Abstract:
Establishing a fault-tolerant connection in a network involves computing diverse working and protection paths. The Shared Risk Link Group (SRLG) [1] concept is used to model several types of failure conditions, such as link, node, and fiber conduit failures. In this work we focus on the problem of computing optimal SRLG/link diverse paths under shared protection. The shared protection technique improves network resource utilization by allowing the protection paths of multiple connections to share resources. We propose an iterative heuristic for computing SRLG/link diverse paths and present a method to calculate a quantitative measure that provides a bounded guarantee on the optimality of the diverse paths computed by the heuristic. Experimental results on computing link diverse paths show that the proposed heuristic is efficient in terms of the number of iterations required (time taken) to compute diverse paths when compared to other previously proposed heuristics.
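To make the problem concrete, the sketch below shows a naive two-step baseline for SRLG-diverse path computation: find a shortest working path, then delete every link that shares an SRLG with it and search for a protection path in what remains. This is only an illustration of the problem setting, not the paper's iterative heuristic or its optimality bound; the topology, weights, and SRLG assignment are hypothetical.

```python
# Naive two-step SRLG-diverse path sketch (illustrative only, not the paper's heuristic).
import heapq

def shortest_path(nodes, links, weight, src, dst):
    """Plain Dijkstra over an undirected link set; returns a list of links or None."""
    adj = {n: [] for n in nodes}
    for (u, v) in links:
        adj[u].append((v, (u, v)))
        adj[v].append((u, (u, v)))
    dist = {n: float("inf") for n in nodes}
    prev = {}
    dist[src] = 0.0
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, link in adj[u]:
            nd = d + weight[link]
            if nd < dist[v]:
                dist[v], prev[v] = nd, (u, link)
                heapq.heappush(pq, (nd, v))
    if dst not in prev:
        return None
    path, cur = [], dst
    while cur != src:
        u, link = prev[cur]
        path.append(link)
        cur = u
    return list(reversed(path))

def two_step_srlg_diverse(nodes, links, weight, srlg_of, src, dst):
    """Working path first, then a protection path avoiding all of its SRLGs."""
    working = shortest_path(nodes, links, weight, src, dst)
    if working is None:
        return None
    risky = set().union(*(srlg_of[l] for l in working))
    survivors = [l for l in links if srlg_of[l].isdisjoint(risky)]
    protection = shortest_path(nodes, survivors, weight, src, dst)
    return (working, protection) if protection else None

# Hypothetical 5-node example: each link belongs to one SRLG (e.g. a fiber conduit).
nodes = ["A", "B", "C", "D", "E"]
links = [("A","B"), ("B","C"), ("A","D"), ("D","C"), ("B","D"), ("A","E"), ("E","C")]
weight = {l: 1.0 for l in links}
srlg_of = {("A","B"): {1}, ("B","C"): {2}, ("A","D"): {3}, ("D","C"): {4},
           ("B","D"): {5}, ("A","E"): {1}, ("E","C"): {6}}  # ("A","E") shares SRLG 1 with ("A","B")
print(two_step_srlg_diverse(nodes, links, weight, srlg_of, "A", "C"))
```

A two-step approach like this can fail (trap topologies) or return suboptimal pairs, which is precisely why iterative heuristics and optimality bounds of the kind described in the abstract are of interest.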
Abstract:
One of the important issues in establishing a fault-tolerant connection in a wavelength division multiplexing optical network is computing a pair of disjoint working and protection paths together with a free wavelength along those paths. While most earlier research focused only on computing disjoint paths, in this work we consider computing both the disjoint paths and a free wavelength along them. In our earlier work we proposed the concept of dependent cost structure (DCS) of protection paths to enhance their resource-sharing ability. Here we extend the concept of DCS of protection paths to wavelength-continuous networks. We formalize the problem of computing disjoint paths with DCS in wavelength-continuous networks and prove that it is NP-complete. We present an iterative heuristic that uses a layered graph model to compute disjoint paths with DCS and identify a free wavelength.
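The sketch below illustrates the generic layered-graph idea mentioned in the abstract: replicate the topology once per wavelength, keep in each layer only the links on which that wavelength is still free, and search each layer for a route, so that finding a path and finding a free wavelength become a single search. It is a minimal illustration under assumed inputs, not the paper's DCS-aware iterative heuristic.

```python
# Layered-graph routing sketch for wavelength-continuous networks (illustrative assumptions).
from collections import deque

def route_with_free_wavelength(nodes, links, free, num_wavelengths, src, dst):
    """Return (wavelength, node path) for the first layer containing a route, else None.
    `free[link]` is the set of wavelengths currently unused on that link (an assumption)."""
    for w in range(num_wavelengths):
        # Layer w: adjacency restricted to links where wavelength w is free.
        adj = {n: [] for n in nodes}
        for (u, v) in links:
            if w in free[(u, v)]:
                adj[u].append(v)
                adj[v].append(u)
        # BFS within the layer (hop-count shortest path, for simplicity).
        prev = {src: None}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            if u == dst:
                path = []
                while u is not None:
                    path.append(u)
                    u = prev[u]
                return w, list(reversed(path))
            for v in adj[u]:
                if v not in prev:
                    prev[v] = u
                    queue.append(v)
    return None  # no wavelength-continuous route on any layer

# Hypothetical 4-node ring with 2 wavelengths; wavelength 0 is busy on link (A, B).
nodes = ["A", "B", "C", "D"]
links = [("A","B"), ("B","C"), ("C","D"), ("D","A")]
free = {("A","B"): {1}, ("B","C"): {0, 1}, ("C","D"): {0, 1}, ("D","A"): {0, 1}}
print(route_with_free_wavelength(nodes, links, free, 2, "A", "C"))
```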
Abstract:
End users develop more software than any other group of programmers, using software authoring devices such as e-mail filtering editors, by-demonstration macro builders, and spreadsheet environments. Despite this, there has been little research on ways to help these programmers with the dependability of their software. We have been addressing this problem in several ways, one of which is supporting end-user debugging activities through fault localization techniques. This paper presents the results of an empirical study conducted in an end-user programming environment to examine the impact of two separate factors in fault localization techniques that affect technique effectiveness. Our results provide new insights into fault localization techniques for end-user programmers and the factors that affect them, with significant implications for the evaluation of those techniques.
Abstract:
Test case prioritization techniques schedule test cases for regression testing in an order that increases their ability to meet some performance goal. One performance goal, rate of fault detection, measures how quickly faults are detected within the testing process. In previous work we provided a metric, APFD, for measuring the rate of fault detection, and techniques for prioritizing test cases to improve APFD, and reported the results of experiments using those techniques. This metric and these techniques, however, applied only in cases in which test costs and fault severity are uniform. In this paper, we present a new metric for assessing the rate of fault detection of prioritized test cases that incorporates varying test case and fault costs. We present the results of a case study illustrating the application of the metric. This study raises several practical questions that might arise in applying test case prioritization; we discuss how practitioners could go about answering these questions.
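For context, the sketch below computes the commonly cited uniform-cost form of APFD that the abstract builds on, using a hypothetical fault-detection matrix; the cost-cognizant metric introduced in the paper additionally weights tests and faults and is not reproduced here.

```python
# APFD sketch (uniform test costs and fault severities), with a hypothetical fault matrix.
def apfd(order, detects):
    """APFD = 1 - (TF_1 + ... + TF_m) / (n * m) + 1 / (2n),
    where TF_i is the 1-based position in `order` of the first test exposing fault i."""
    n = len(order)
    faults = sorted({f for fs in detects.values() for f in fs})
    m = len(faults)
    tf = [next(i + 1 for i, t in enumerate(order) if fault in detects[t])
          for fault in faults]
    return 1 - sum(tf) / (n * m) + 1 / (2 * n)

# Hypothetical example: 4 tests, 3 faults, two candidate orderings.
detects = {"t1": {1}, "t2": {2, 3}, "t3": {3}, "t4": set()}
print(apfd(["t1", "t2", "t3", "t4"], detects))  # faults exposed early -> higher APFD
print(apfd(["t4", "t3", "t1", "t2"], detects))  # faults exposed late  -> lower APFD
```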