957 results for bandwidth


Abstract:

A novel integration technique has been developed using band-gap energy control of InGaAsP/InGaAsP multiquantum-well (MQW) structures during a simultaneous ultra-low-pressure (22 mbar) selective-area-growth (SAG) process in metal-organic chemical vapour deposition. A fundamental study of the controllability of the band-gap energy by the SAG method is performed. A large band-gap photoluminescence wavelength shift of 83 nm is obtained with a small mask width variation (0-30 μm). The method is then applied to fabricate an MQW distributed-feedback laser monolithically integrated with an electroabsorption modulator. The experimental results exhibit superior device characteristics: a low threshold of 19 mA and an extinction ratio of over 24 dB when coupled into a single-mode fibre. A modulation bandwidth of more than 10 GHz is also achieved, demonstrating that the ultra-low-pressure SAG technique is a promising approach for high-speed transmission photonic integrated circuits.

Abstract:

A monolithic photoreceiver consisting of a double photodiode (DPD) detector and a regulated cascode (RGC) transimpedance amplifier (TIA) is designed. The small-signal circuit model of the DPD is given and a bandwidth design method for the monolithic photoreceiver is presented. An important factor that limits the bandwidth of the DPD detector and of the photoreceiver is identified and analyzed in detail. A monolithic photoreceiver with 1.71 GHz bandwidth and 49 dB transimpedance gain is designed and simulated in a low-cost 0.6 μm CMOS process, and test results are given.
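
As a rough illustration of the kind of estimate such a bandwidth design method starts from, the sketch below computes a front-end input-pole bandwidth from the photodiode capacitance and the effective input resistance seen at the RGC stage. This is a common first-order approximation, not the paper's circuit model; the function name and the numeric values are assumptions for illustration only.

```python
import math

def input_pole_bandwidth(r_in_ohm, c_pd_farad):
    """Approximate -3 dB bandwidth of the photoreceiver front-end input pole.

    Assumes the input node (photodiode capacitance driven by the RGC stage's
    effective input resistance) is the dominant pole, a standard first-order
    approximation rather than the paper's exact small-signal model.
    """
    return 1.0 / (2.0 * math.pi * r_in_ohm * c_pd_farad)

# Illustrative values only (not taken from the paper): a 50-ohm effective
# input resistance and a 1 pF photodiode capacitance give a pole near 3.2 GHz;
# a larger capacitance or resistance lowers the achievable bandwidth.
print(input_pole_bandwidth(50.0, 1e-12))   # ~3.2e9 Hz
```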

Abstract:

An accurate technique for measuring the frequency response of semiconductor laser diode chips is proposed and experimentally demonstrated. The effects of test-jig parasitics can be completely removed from the measurement by a new calibration method. In theory, the measuring range of the measurement system is determined only by the measuring ranges of the instruments (network analyzer and photodetector). Bandwidths of 7.5 GHz and 10 GHz are measured for the diodes. The results show that the method is feasible and that, compared with other methods, it is more precise and easier to use.
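
The abstract does not detail the calibration itself. The sketch below only illustrates the generic de-embedding idea it alludes to: a separately measured jig response is divided out of the raw frequency-response trace, after which the 3 dB bandwidth can be read off. All function names are hypothetical and the approach is a stand-in, not the paper's method.

```python
import numpy as np

def deembed_jig(raw_response, jig_response):
    """Remove the test-jig response from a raw frequency-response measurement.

    raw_response and jig_response are complex arrays sampled at the same
    frequencies (e.g., S21 traces from the network analyzer). Dividing out
    the jig trace is the generic de-embedding step; the paper's calibration
    method is not described in the abstract, so this is only illustrative.
    """
    return np.asarray(raw_response) / np.asarray(jig_response)

def bandwidth_3db(freqs_hz, response):
    """Return the first frequency where the magnitude falls 3 dB below DC."""
    mag_db = 20.0 * np.log10(np.abs(response) / np.abs(response[0]))
    below = np.nonzero(mag_db <= -3.0)[0]
    return freqs_hz[below[0]] if below.size else None
```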

Abstract:

A novel approach to achieving a polarization-insensitive semiconductor optical amplifier is presented. The active layer consists of a graded tensile-strained bulk-like structure, which not only enhances the TM-mode material gain to realize polarization insensitivity, but also provides a large 3 dB bandwidth owing to the different strains introduced into the active layer. 3 dB bandwidths of more than 40 nm and 65 nm have been obtained in experiment and theory, respectively. The characteristics of this polarization-insensitive structure are analyzed, and the influence of the amount of strain and of the thickness of the strained layer on the polarization insensitivity is discussed.

Abstract:

Electroabsorption (EA) modulators integrated with partially gain-coupled distributed feedback (DFB) lasers have been fabricated and show high single-mode yield and wavelength stability. The small-signal bandwidth is about 7.5 GHz. Strained Si1-xGex/Si multiple-quantum-well (MQW) resonant-cavity-enhanced (RCE) photodetectors with SiO2/Si distributed Bragg reflectors (DBRs) as the mirrors have been fabricated and show a clear narrow-bandwidth response. The external quantum efficiency at 1.3 μm is measured to be about 3.5% under a reverse bias of 16 V. A novel GaInNAs/GaAs MQW RCE p-i-n photodetector with high-reflectance GaAs/AlAs DBR mirrors has also been demonstrated and exhibits wavelength-selective detection with a peak-response FWHM of 12 nm.

Abstract:

This is an author-created, un-copyedited version of an article accepted for publication in Acta Physica Polonica A. The Version of Record is available online at http://przyrbwn.icm.edu.pl/APP/PDF/118/a118z2p31.pdf

Abstract:

We consider the problem of efficiently and fairly allocating bandwidth at a highly congested link to a diverse set of flows, including TCP flows with various Round Trip Times (RTT), non-TCP-friendly flows such as Constant-Bit-Rate (CBR) applications using UDP, misbehaving, or malicious flows. Though simple, a FIFO queue management is vulnerable. Fair Queueing (FQ) can guarantee max-min fairness but fails at efficiency. RED-PD exploits the history of RED's actions in preferentially dropping packets from higher-rate flows. Thus, RED-PD attempts to achieve fairness at low cost. By relying on RED's actions, RED-PD turns out not to be effective in dealing with non-adaptive flows in settings with a highly heterogeneous mix of flows. In this paper, we propose a new approach we call RED-NB (RED with No Bias). RED-NB does not rely on RED's actions. Rather it explicitly maintains its own history for the few high-rate flows. RED-NB then adaptively adjusts flow dropping probabilities to achieve max-min fairness. In addition, RED-NB helps RED itself at very high loads by tuning RED's dropping behavior to the flow characteristics (restricted in this paper to RTTs) to eliminate its bias against long-RTT TCP flows while still taking advantage of RED's features at low loads. Through extensive simulations, we confirm the fairness of RED-NB and show that it outperforms RED, RED-PD, and CHOKe in all scenarios.
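
A minimal sketch of the core idea described above: keep history only for the few high-rate flows and nudge each monitored flow's drop probability toward its max-min fair share. Flow identification, the interaction with RED's own queue-based dropping, and the RTT-bias tuning are omitted; the class name, the gain constant, and the adaptation rule are illustrative assumptions, not the authors' exact algorithm.

```python
class RedNbSketch:
    """Sketch of RED-NB's per-flow adaptation: monitor only high-rate flows
    and adjust each one's extra drop probability from its own measured history
    so that its throughput approaches the max-min fair share."""

    def __init__(self, fair_share_bps, gain=0.05):
        self.fair_share = fair_share_bps      # target per-flow rate at the link
        self.gain = gain                      # adaptation step size (assumed)
        self.drop_prob = {}                   # flow_id -> extra drop probability

    def update(self, flow_id, measured_rate_bps):
        """Adapt a monitored flow's drop probability from its measured rate."""
        if measured_rate_bps <= self.fair_share:
            self.drop_prob.pop(flow_id, None)  # stop penalizing conforming flows
            return 0.0
        excess = (measured_rate_bps - self.fair_share) / measured_rate_bps
        p = self.drop_prob.get(flow_id, 0.0) + self.gain * excess
        self.drop_prob[flow_id] = min(p, 1.0)
        return self.drop_prob[flow_id]
```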

Abstract:

The best-effort nature of the Internet poses a significant obstacle to the deployment of many applications that require guaranteed bandwidth. In this paper, we present a novel approach that enables two edge/border routers, which we call Internet Traffic Managers (ITMs), to use an adaptive number of TCP connections to set up a tunnel of desirable bandwidth between them. The number of TCP connections that comprise this tunnel is elastic in the sense that it increases/decreases in tandem with competing cross traffic to maintain a target bandwidth. An origin ITM would then schedule incoming packets from an application requiring guaranteed bandwidth over that elastic tunnel. Unlike many proposed solutions that aim to deliver soft QoS guarantees, our elastic-tunnel approach does not require any support from core routers (as with IntServ and DiffServ); it is scalable in the sense that core routers do not have to maintain per-flow state (as with IntServ); and it is readily deployable within a single ISP or across multiple ISPs. To evaluate our approach, we develop a flow-level control-theoretic model to study the transient behavior of established elastic TCP-based tunnels. The model captures the effect of cross-traffic connections on our bandwidth allocation policies. Through extensive simulations, we confirm the effectiveness of our approach in providing soft bandwidth guarantees. We also outline our kernel-level ITM prototype implementation.
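
The sketch below captures the elastic-tunnel intuition with a simple proportional rule: grow or shrink the number of TCP connections so the tunnel's measured throughput tracks the target. The function name, the step rule, and the connection cap are illustrative; the paper uses a flow-level control-theoretic model rather than this heuristic.

```python
def adjust_tunnel_connections(n_conns, measured_bw_bps, target_bw_bps,
                              per_conn_bw_bps, max_conns=64):
    """Sketch of the elastic-tunnel idea: adapt the number of TCP connections
    between the two ITMs so aggregate tunnel throughput tracks a target.

    per_conn_bw_bps is the currently observed average throughput of one tunnel
    connection, which already reflects the pressure of competing cross traffic.
    """
    if per_conn_bw_bps <= 0:
        return n_conns
    deficit = target_bw_bps - measured_bw_bps
    delta = int(round(deficit / per_conn_bw_bps))   # connections to add or remove
    return max(1, min(max_conns, n_conns + delta))

# Example: 5 connections delivering 8 Mb/s against a 12 Mb/s target, with each
# connection currently getting ~2 Mb/s -> add 2 connections.
print(adjust_tunnel_connections(5, 8e6, 12e6, 2e6))   # -> 7
```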

Abstract:

Replication is a commonly proposed solution to problems of scale associated with distributed services. However, when a service is replicated, each client must be assigned a server. Prior work has generally assumed this assignment to be static. In contrast, we propose dynamic server selection, and show that it enables application-level congestion avoidance. To make dynamic server selection practical, we demonstrate the use of three tools. In addition to direct measurements of round-trip latency, we introduce and validate two new tools: bprobe, which estimates the maximum possible bandwidth along a given path; and cprobe, which estimates the current congestion along a path. Using these tools we demonstrate dynamic server selection and compare it to previous static approaches. We show that dynamic server selection consistently outperforms static policies by as much as 50%. Furthermore, we demonstrate the importance of each of our tools in performing dynamic server selection.
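
A small sketch of how the three measurements mentioned above could drive a selection decision: combine round-trip latency with a bprobe-style capacity estimate and a cprobe-style congestion estimate into an expected transfer time, and pick the best server. The scoring rule and all names are illustrative assumptions, not the paper's exact policy.

```python
def pick_server(candidates, transfer_bytes):
    """Sketch of dynamic server selection from per-path measurements.

    Each candidate is (name, rtt_s, bottleneck_bps, congestion_bps), where the
    last two stand in for bprobe- and cprobe-style estimates. The score is
    RTT plus transfer time over estimated available bandwidth."""
    def expected_time(c):
        _, rtt, capacity, congestion = c
        available = max(capacity - congestion, 1.0)     # avoid divide-by-zero
        return rtt + (8.0 * transfer_bytes) / available

    return min(candidates, key=expected_time)[0]

servers = [
    ("east", 0.030, 10e6, 8e6),    # low RTT but heavily congested path
    ("west", 0.090, 10e6, 2e6),    # higher RTT, more available bandwidth
]
print(pick_server(servers, 1_000_000))   # -> "west" for a 1 MB transfer
```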

Abstract:

To support the diverse Quality of Service (QoS) requirements of real-time (e.g. audio/video) applications in integrated services networks, several routing algorithms that allow for the reservation of the needed bandwidth over a Virtual Circuit (VC) established on one of several candidate routes have been proposed. Traditionally, such routing is done using the least-loaded concept, and thus results in balancing the load across the set of candidate routes. In a recent study, we established the inadequacy of this load-balancing practice and proposed the use of load profiling as an alternative. Load profiling techniques allow the distribution of "available" bandwidth across a set of candidate routes to match the characteristics of incoming VC QoS requests. In this paper we thoroughly characterize the performance of VC routing using load profiling and contrast it to routing using load balancing and load packing. We do so both analytically and via extensive simulations of multi-class traffic routing in Virtual Path (VP) based networks. Our findings confirm that for routing guaranteed bandwidth flows in VP networks, load balancing is not desirable as it results in VP bandwidth fragmentation, which adversely affects the likelihood of accepting new VC requests. This fragmentation is more pronounced when the granularity of VC requests is large. Typically, this occurs when a common VC is established to carry the aggregate traffic flow of many high-bandwidth real-time sources. For VP-based networks, our simulation results show that our load-profiling VC routing scheme performs as well as or better than the traditional load-balancing VC routing in terms of revenue under both skewed and uniform workloads. Furthermore, load-profiling routing improves routing fairness by proactively increasing the chances of admitting high-bandwidth connections.
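
The fragmentation argument above can be made concrete with a toy comparison. Least-loaded (load-balancing) routing always picks the route with the largest residual bandwidth; a best-fit rule, used here only as an illustrative stand-in for the anti-fragmentation intuition behind load profiling, keeps large contiguous residuals free for later high-bandwidth requests. The real load-profiling scheme matches residuals to the distribution of request sizes, which this sketch does not implement.

```python
def least_loaded(routes, demand_bps):
    """Load balancing: pick the feasible route with the most residual bandwidth."""
    feasible = [r for r in routes if r["residual_bps"] >= demand_bps]
    return max(feasible, key=lambda r: r["residual_bps"]) if feasible else None

def best_fit(routes, demand_bps):
    """Anti-fragmentation stand-in: pick the feasible route whose residual
    bandwidth is closest to the demand, preserving large residuals for future
    high-bandwidth VC requests (illustrative, not the paper's algorithm)."""
    feasible = [r for r in routes if r["residual_bps"] >= demand_bps]
    return min(feasible, key=lambda r: r["residual_bps"]) if feasible else None

vps = [{"id": "VP1", "residual_bps": 6e6}, {"id": "VP2", "residual_bps": 2e6}]
# A 1.5 Mb/s request: least-loaded fragments VP1, while best-fit uses VP2 and
# preserves VP1's 6 Mb/s for a later high-bandwidth connection.
print(least_loaded(vps, 1.5e6)["id"], best_fit(vps, 1.5e6)["id"])   # VP1 VP2
```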

Abstract:

Accurate measurement of network bandwidth is crucial for flexible Internet applications and protocols which actively manage and dynamically adapt to changing utilization of network resources. These applications must do so to perform tasks such as distributing and delivering high-bandwidth media, scheduling service requests and performing admission control. Extensive work has focused on two approaches to measuring bandwidth: measuring it hop-by-hop, and measuring it end-to-end along a path. Unfortunately, best-practice techniques for the former are inefficient and techniques for the latter are only able to observe bottlenecks visible at end-to-end scope. In this paper, we develop and simulate end-to-end probing methods which can measure bottleneck bandwidth along arbitrary, targeted subpaths of a path in the network, including subpaths shared by a set of flows. As another important contribution, we describe a number of practical applications which we foresee as standing to benefit from solutions to this problem, especially in emerging, flexible network architectures such as overlay networks, ad-hoc networks, peer-to-peer architectures and massively accessed content servers.
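
For context, the standard end-to-end building block that such probing methods extend is the packet-pair estimate: two back-to-back probes leave the bottleneck spaced by the transmission time of one packet, so capacity is roughly packet size over measured dispersion. The sketch below shows only this classic single-pair estimate; the paper's contribution, probing arbitrary subpaths, requires more elaborate probe patterns not shown here.

```python
def packet_pair_estimate(packet_size_bytes, dispersion_s):
    """Classic packet-pair bottleneck-bandwidth estimate: two back-to-back
    probe packets exit the bottleneck separated by the time needed to
    transmit one of them, so capacity ~= packet size / dispersion."""
    return 8.0 * packet_size_bytes / dispersion_s   # bits per second

# Example: 1500-byte probes arriving 1.2 ms apart imply a ~10 Mb/s bottleneck.
print(packet_pair_estimate(1500, 0.0012))   # -> 10000000.0
```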

Abstract:

The advent of virtualization and cloud computing technologies necessitates the development of effective mechanisms for the estimation and reservation of resources needed by content providers to deliver large numbers of video-on-demand (VOD) streams through the cloud. Unfortunately, capacity planning for the QoS-constrained delivery of a large number of VOD streams is inherently difficult as VBR encoding schemes exhibit significant bandwidth variability. In this paper, we present a novel resource management scheme to make such allocation decisions using a mixture of per-stream reservations and an aggregate reservation, shared across all streams to accommodate peak demands. The shared reservation provides capacity slack that enables statistical multiplexing of peak rates, while assuring analytically bounded frame-drop probabilities, which can be adjusted by trading off buffer space (and consequently delay) and bandwidth. Our two-tiered bandwidth allocation scheme enables the delivery of any set of streams with less bandwidth (or equivalently with higher link utilization) than state-of-the-art deterministic smoothing approaches. The algorithm underlying our proposed framework uses three per-stream parameters and is linear in the number of servers, making it particularly well suited for use in an on-line setting. We present results from extensive trace-driven simulations, which confirm the efficiency of our scheme, especially for small buffer sizes and delay bounds, and which underscore the significant realizable bandwidth savings, typically yielding losses that are an order of magnitude or more below our analytically derived bounds.
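
A toy sketch of the two-tiered idea: reserve a base rate per stream and add one shared slack reservation sized to absorb most coincident peaks above those base rates. Sizing the slack as an empirical quantile of the aggregate excess is an illustrative assumption; the paper instead derives analytical frame-drop bounds and uses three per-stream parameters, so treat the rule and numbers below as a stand-in.

```python
def two_tier_reservation(streams, slack_quantile=0.95):
    """Sketch of a two-tiered allocation: per-stream base reservations plus
    one shared slack reservation that absorbs peaks above the base rates.

    Each stream is a list of per-interval rates in bps. The shared slack is
    sized as a quantile of the aggregate per-interval excess, which captures
    the statistical-multiplexing benefit in a purely empirical way."""
    bases = [sum(s) / len(s) for s in streams]          # per-stream base rates
    horizon = min(len(s) for s in streams)
    excess = [sum(max(s[t] - b, 0.0) for s, b in zip(streams, bases))
              for t in range(horizon)]                  # aggregate excess per interval
    excess.sort()
    shared = excess[int(slack_quantile * (len(excess) - 1))]
    return sum(bases) + shared                          # total reserved bandwidth

# Two toy VBR streams: the shared slack covers most coincident peaks while the
# total stays well below the sum of the streams' individual peak rates.
s1 = [2e6, 3e6, 5e6, 2e6, 2e6]
s2 = [1e6, 4e6, 1e6, 1e6, 6e6]
print(two_tier_reservation([s1, s2]))   # ~7.6e6 bps vs. 11e6 bps of summed peaks
```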

Abstract:

We propose Trade & Cap (T&C), an economics-inspired mechanism that incentivizes users to voluntarily coordinate their consumption of the bandwidth of a shared resource (e.g., a DSLAM link) so as to converge on what they perceive to be an equitable allocation, while ensuring efficient resource utilization. Under T&C, rather than acting as an arbiter, an Internet Service Provider (ISP) acts as an enforcer of what the community of rational users sharing the resource decides is a fair allocation of that resource. Our T&C mechanism proceeds in two phases. In the first, software agents acting on behalf of users engage in a strategic trading game in which each user agent selfishly chooses bandwidth slots to reserve in support of primary, interactive network usage activities. In the second phase, each user is allowed to acquire additional bandwidth slots in support of a presumed open-ended need for fluid bandwidth, catering to secondary applications. The acquisition of this fluid bandwidth is constrained by the remaining "buying power" of each user and by prevalent "market prices" – both of which are determined by the results of the trading phase and a desirable aggregate cap on link utilization. We present analytical results that establish the underpinnings of our T&C mechanism, including game-theoretic results pertaining to the trading phase, and pricing of fluid bandwidth allocation pertaining to the capping phase. Using real network traces, we present extensive experimental results that demonstrate the benefits of our scheme, which we also show to be practical by highlighting the salient features of an efficient implementation architecture.
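
A very small sketch of the capping-phase intuition: once the trading game fixes each user's reserved slots and leftover "buying power", the fluid capacity remaining under the aggregate cap is shared among users according to that buying power. The proportional split and all names below are illustrative assumptions; the paper derives market prices game-theoretically rather than using this rule.

```python
def allocate_fluid_bandwidth(buying_power, fluid_capacity_bps):
    """Sketch of the capping phase: split the fluid capacity left under the
    aggregate cap in proportion to each user's remaining buying power.

    buying_power: dict mapping user -> budget left over from the trading phase.
    fluid_capacity_bps: link capacity under the cap after slot reservations."""
    total = sum(buying_power.values())
    if total <= 0:
        return {u: 0.0 for u in buying_power}
    return {u: fluid_capacity_bps * b / total for u, b in buying_power.items()}

# Example: 4 Mb/s of fluid capacity split among three users' leftover budgets.
print(allocate_fluid_bandwidth({"alice": 3.0, "bob": 1.0, "carol": 0.0}, 4e6))
# -> {'alice': 3000000.0, 'bob': 1000000.0, 'carol': 0.0}
```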

Abstract:

A number of recent studies have pointed out that TCP's performance over ATM networks tends to suffer, especially under congestion and switch buffer limitations. Switch-level enhancements and link-level flow control have been proposed to improve TCP's performance in ATM networks. Selective Cell Discard (SCD) and Early Packet Discard (EPD) ensure that partial packets are discarded from the network "as early as possible", thus reducing wasted bandwidth. While such techniques improve the achievable throughput, their effectiveness tends to degrade in multi-hop networks. In this paper, we introduce Lazy Packet Discard (LPD), an AAL-level enhancement that improves effective throughput, reduces response time, and minimizes wasted bandwidth for TCP/IP over ATM. In contrast to the SCD and EPD policies, LPD delays as much as possible the removal from the network of cells belonging to a partially communicated packet. We outline the implementation of LPD and show the performance advantage of TCP/LPD, compared to plain TCP and TCP/EPD through analysis and simulations.
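
For reference, the sketch below shows the Early Packet Discard baseline the abstract describes: when the switch buffer is above a threshold at the arrival of a packet's first cell, the whole packet is discarded up front so no downstream bandwidth is wasted on cells that cannot be delivered as a complete packet. LPD, the paper's proposal, instead postpones removing cells of a partially transmitted packet as long as possible, so its decision cannot be captured at this single point and is not shown. Threshold and capacity values are illustrative.

```python
def epd_accept_packet(buffer_cells, buffer_capacity, epd_threshold, packet_cells):
    """Early Packet Discard decision at an ATM switch buffer (cells as units).

    If occupancy already exceeds the EPD threshold when a new packet starts
    arriving, the entire packet is dropped early; otherwise it is admitted
    only if all of its cells fit in the buffer."""
    if buffer_cells >= epd_threshold:
        return False                                   # drop the whole packet early
    return buffer_cells + packet_cells <= buffer_capacity

print(epd_accept_packet(buffer_cells=90, buffer_capacity=128,
                        epd_threshold=96, packet_cells=30))    # True: below threshold
print(epd_accept_packet(buffer_cells=100, buffer_capacity=128,
                        epd_threshold=96, packet_cells=30))    # False: EPD kicks in
```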

Abstract:

An important factor for high-speed optical communication is the availability of ultrafast and low-noise photodetectors. Among the semiconductor photodetectors that are commonly used in today's long-haul and metro-area fiber-optic systems, avalanche photodiodes (APDs) are often preferred over p-i-n photodiodes due to their internal gain, which significantly improves the receiver sensitivity and alleviates the need for optical pre-amplification. Unfortunately, the very process of carrier impact ionization that generates the gain is random and inherently noisy, and it results in fluctuations not only in the gain but also in the time response. Recently, we developed a theory characterizing the autocorrelation function of APDs that incorporates the dead-space effect, an effect that is very significant in thin, high-performance APDs. The research extends the time-domain analysis of the dead-space multiplication model to compute the autocorrelation function of the APD impulse response. However, the computation requires a large amount of memory and is very time consuming. In this research, we describe our experience in parallelizing the code in MPI and OpenMP using CAPTools. Several array partitioning schemes and scheduling policies are implemented and tested. Our results show that the code is scalable up to 64 processors on an SGI Origin 2000 machine and has small average errors.
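
The parallelization problem above is essentially one of partitioning a grid of (t1, t2) time-lag points, each requiring an expensive evaluation of the impulse-response autocorrelation, across processors. The sketch below illustrates that partitioning idea with Python's multiprocessing purely as a stand-in for the MPI/OpenMP code generated via CAPTools; the per-point function is a placeholder, and the chunksize plays the role of a simple scheduling policy. None of this is the paper's actual implementation.

```python
from multiprocessing import Pool
import itertools

def autocorrelation_point(pair):
    """Placeholder for the expensive per-(t1, t2) evaluation of the APD
    impulse-response autocorrelation R(t1, t2) under the dead-space
    multiplication model; returns a cheap dummy value so the sketch runs."""
    t1, t2 = pair
    return (t1, t2, float(t1 * t2))          # illustrative value only

def compute_grid(times, workers=4):
    """Partition the (t1, t2) grid across worker processes. The chunksize
    controls how many points each worker gets at a time, standing in for a
    block vs. cyclic scheduling policy."""
    pairs = list(itertools.product(times, repeat=2))
    with Pool(processes=workers) as pool:
        return pool.map(autocorrelation_point, pairs,
                        chunksize=max(1, len(pairs) // workers))

if __name__ == "__main__":
    grid = compute_grid([0.0, 0.5, 1.0, 1.5], workers=2)
    print(len(grid))                         # 16 (t1, t2) points
```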