4 results for Optical interferometric method
in DigitalCommons@University of Nebraska - Lincoln
Abstract:
Optical networks based on passive-star couplers and employing WDM have been proposed for deployment in local and metropolitan areas. These networks suffer from splitting, coupling, and attenuation losses. Since there is an upper bound on transmitter power and a lower bound on receiver sensitivity, optical amplifiers are usually required to compensate for these power losses. Because amplifiers are costly, it is desirable to minimize their total number in the network. However, an optical amplifier has constraints on the maximum gain and the maximum output power it can supply; thus, optical amplifier placement becomes a challenging problem. In fact, the general problem of minimizing the total amplifier count is a mixed-integer nonlinear program. Previous studies have attacked the amplifier-placement problem by adding the “artificial” constraint that all wavelengths present at a particular point in a fiber be at the same power level. This constraint simplifies the problem into a solvable mixed-integer linear program. Unfortunately, it can exclude feasible solutions that use fewer amplifiers but do not keep the wavelengths equally powered. In this paper, we present a method to solve the minimum amplifier-placement problem while avoiding the equally-powered-wavelength constraint. We demonstrate that, by allowing signals to operate at different power levels, our method can reduce the number of amplifiers required.
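To make the power-budget arithmetic behind this abstract concrete, here is a minimal sketch with illustrative numbers of my own (not taken from the paper). It considers a single transmitter-to-receiver path and only the gain cap, ignoring coupling/excess losses, the output-power cap, and the network-wide placement question.

```python
import math

def amplifiers_needed(link_km, star_ports, tx_dbm, rx_sens_dbm,
                      atten_db_per_km=0.2, max_gain_db=20.0):
    """Estimate how many amplifiers one path needs if each amplifier
    can supply at most max_gain_db of gain (toy model)."""
    splitting_loss_db = 10 * math.log10(star_ports)   # ideal 1:N star split
    attenuation_db = atten_db_per_km * link_km
    total_loss_db = splitting_loss_db + attenuation_db
    margin_db = tx_dbm - rx_sens_dbm                  # loss the path tolerates unaided
    deficit_db = max(0.0, total_loss_db - margin_db)
    return math.ceil(deficit_db / max_gain_db)

# Example: 100 km path through a 32-port star, 0 dBm transmitter, -30 dBm receiver.
print(amplifiers_needed(link_km=100, star_ports=32, tx_dbm=0, rx_sens_dbm=-30))  # -> 1
```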
Abstract:
Optical networks based on passive-star couplers and employing wavelength-division multiplexing (WDM) have been proposed for deployment in local and metropolitan areas. Amplifiers are required in such networks to compensate for the power losses due to splitting and attenuation. However, an optical amplifier has constraints on the maximum gain and the maximum output power it can supply; thus, optical amplifier placement becomes a challenging problem. The general problem of minimizing the total amplifier count, subject to the device constraints, is a mixed-integer nonlinear program. Previous studies have attacked the amplifier-placement problem by adding the “artificial” constraint that all wavelengths present at a particular point in a fiber be at the same power level. In this paper, we present a method to solve the minimum amplifier-placement problem while avoiding the equally-powered-wavelength constraint. We demonstrate that, by allowing signals to operate at different power levels, our method can reduce the number of amplifiers required in several small- to medium-sized networks.
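The two device constraints named in this abstract couple all wavelengths on a fiber together: the gain an amplifier can actually apply depends on the sum of the per-wavelength input powers. The sketch below illustrates this with hypothetical limits and power levels (the caps and the helper are assumptions, not values from the paper); it is this coupling that makes the general placement problem nonlinear when the power levels are free variables.

```python
import math

MAX_GAIN_DB = 20.0        # assumed per-amplifier gain limit
MAX_TOTAL_OUT_MW = 10.0   # assumed per-amplifier total output-power limit

def max_usable_gain_db(per_wavelength_mw):
    """Largest gain one amplifier can apply to every wavelength on a fiber,
    limited by both the gain cap and the total-output-power cap."""
    total_in_mw = sum(per_wavelength_mw)
    gain_allowed_by_power_cap = 10 * math.log10(MAX_TOTAL_OUT_MW / total_in_mw)
    return min(MAX_GAIN_DB, gain_allowed_by_power_cap)

# Three wavelengths at unequal power levels vs. forced to a common low level:
print(max_usable_gain_db([0.01, 0.1, 1.0]))    # ~9.5 dB: limited by output power
print(max_usable_gain_db([0.01, 0.01, 0.01]))  # 20 dB: limited by the gain cap
```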
Abstract:
Centralized and distributed methods are two connection-management schemes in wavelength-convertible optical networks. In earlier work, the centralized scheme was reported to have a lower network blocking probability than the distributed one. Hence, much of the previous work on connection management has compared different algorithms within only the distributed scheme or only the centralized scheme. However, we believe that the network blocking probability of these two connection-management schemes depends, to a great extent, on the network traffic patterns and reservation times. Our simulation results reveal that the performance improvement (in terms of blocking probability) of the centralized method over the distributed method is inversely proportional to the ratio of the average connection interarrival time to the reservation time. Once that ratio increases beyond a threshold, the two connection-management schemes yield almost the same blocking probability under the same network load. In this paper, we review the working procedures of the distributed and centralized schemes, discuss the tradeoff between them, compare the two methods under different network traffic patterns via simulation, and draw conclusions based on the simulation data.
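The ratio dependence claimed above can be illustrated with a toy single-link Monte Carlo model. Everything here is an assumption of mine, not the authors' simulator: the channel count, traffic parameters, and in particular the simplification that the distributed scheme decides on channel state that lags by one reservation time, so nearly simultaneous requests can collide.

```python
import random

def simulate(scheme, channels=16, mean_interarrival=1.0, mean_hold=10.0,
             resv_time=0.5, requests=100_000, seed=42):
    """Toy single-link blocking model (hypothetical, not the paper's).
    'centralized' decides with up-to-date channel state; 'distributed'
    decides with state that lags by resv_time, so a channel that looked
    free may already carry a not-yet-visible reservation."""
    rng = random.Random(seed)
    release = [0.0] * channels   # true time each channel becomes free
    visible = [0.0] * channels   # time the latest reservation becomes known network-wide
    t, blocked = 0.0, 0
    for _ in range(requests):
        t += rng.expovariate(1.0 / mean_interarrival)
        if scheme == "centralized":
            candidates = [c for c in range(channels) if release[c] <= t]
        else:  # distributed: stale view of the channel state
            candidates = [c for c in range(channels)
                          if release[c] <= t or visible[c] > t]
        if not candidates:
            blocked += 1
            continue
        c = rng.choice(candidates)
        if release[c] > t:       # collision with a not-yet-visible reservation
            blocked += 1
            continue
        release[c] = t + resv_time + rng.expovariate(1.0 / mean_hold)
        visible[c] = t + resv_time
    return blocked / requests

# Sweep the interarrival-to-reservation-time ratio (mean interarrival is 1.0):
for resv in (2.0, 0.5, 0.1):
    gap = simulate("distributed", resv_time=resv) - simulate("centralized", resv_time=resv)
    print(f"ratio {1.0 / resv:5.1f}: extra blocking of distributed = {gap:+.4f}")
```

In this toy model the gap between the two schemes shrinks as the ratio grows, which is the qualitative trend the abstract describes.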
Abstract:
Protecting a network against link failures is a major challenge faced by network operators. A protection scheme has to address two important objectives: fast recovery and minimizing the amount of backup resources needed. Every protection algorithm is a tradeoff between these two objectives. In this paper, we study the problem of segment protection. In particular, we investigate the optimal segment size that achieves the best tradeoff between the time taken for recovery and the bandwidth used by the backup segments. We focus on the uniform fixed-length segment protection method, in which each primary path is divided into fixed-length segments, with the exception of the last segment in the path. We observe that the optimal segment size for a given network depends on several factors, such as the topology and the ratio of the costs involved.
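The uniform fixed-length segmentation described above is straightforward to sketch. The helper below and its node names are illustrative only; the paper's actual protection-path computation is not reproduced here.

```python
def fixed_length_segments(path, seg_len):
    """Split a primary path (list of nodes) into segments of seg_len hops;
    the final segment may be shorter.  Consecutive segments share their
    boundary node so each segment can be protected independently."""
    hops = len(path) - 1
    segments = []
    for start in range(0, hops, seg_len):
        end = min(start + seg_len, hops)
        segments.append(path[start:end + 1])
    return segments

# Example: a 7-hop primary path split into 3-hop segments (last one is 1 hop).
print(fixed_length_segments(["A", "B", "C", "D", "E", "F", "G", "H"], seg_len=3))
# [['A', 'B', 'C', 'D'], ['D', 'E', 'F', 'G'], ['G', 'H']]
```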