958 results for Network loss
Abstract:
Complexity of mufflers generally introduces considerable pressure drop, which adversely affects engine performance. Little literature is available on the pressure drop across perforates. In this paper, the stagnation pressure drop across perforated muffler elements has been measured experimentally, and generalized expressions have been developed for the pressure loss across cross-flow expansion and cross-flow contraction elements. A flow resistance model available in the literature has been used to analytically determine the flow distribution, and thereby the pressure drop, of mufflers. A generalized expression has been derived here for evaluating the equivalent flow resistance of parallel flow paths. Expressions for flow resistance across perforated elements, derived by means of flow experiments, have been implemented in the flow resistance network, and the results have been validated with experimental data. The newly developed integrated flow resistance networks thus enable determination of the normalized stagnation pressure drop of commercial automotive mufflers, enabling an efficient flow-acoustic design of silencing systems.
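As a rough illustration of how an equivalent flow resistance for parallel flow paths might be evaluated, the sketch below assumes each path obeys a quadratic pressure-flow law (dp = R*Q^2), a common lumped-element assumption; the generalized expression actually derived in the paper may differ in form.

```python
# Minimal sketch: equivalent flow resistance of parallel flow paths, assuming
# each path obeys a quadratic pressure-flow law  dp = R * Q**2.
# (Illustrative assumption only; the paper derives its own generalized expression.)

def equivalent_parallel_resistance(resistances):
    """All parallel paths see the same stagnation pressure drop dp.
    With dp = R_i * Q_i**2, each path carries Q_i = sqrt(dp / R_i), so
    Q_total = sqrt(dp) * sum(1/sqrt(R_i)), and dp = R_eq * Q_total**2 gives
    R_eq = 1 / (sum(1/sqrt(R_i)))**2."""
    return 1.0 / sum(r ** -0.5 for r in resistances) ** 2

def flow_split(resistances, q_total):
    """Distribute a total volume flow among the parallel paths."""
    r_eq = equivalent_parallel_resistance(resistances)
    dp = r_eq * q_total ** 2                      # common pressure drop
    return dp, [(dp / r) ** 0.5 for r in resistances]

if __name__ == "__main__":
    dp, flows = flow_split([2.0e5, 5.0e5, 8.0e5], q_total=0.05)  # arbitrary units
    print(f"pressure drop = {dp:.1f}, path flows = {flows}")
```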
Abstract:
A 2-D SW-banyan network is introduced by properly folding the 1-D SW-banyan network, and its corresponding optical setup is proposed by means of polarizing beamsplitters and 2-D phase spatial light modulators. Then, based on the characteristics of the network and the proposed optical setup, the control of the routing path between any source-destination pair is given, and a method to determine whether a given permutation is permissible is discussed. Because the proposed optical setup consists only of optical polarization elements, it is compact in structure, its energy loss and crosstalk are low, and its available number of channels is high. (C) 1996 Society of Photo-Optical Instrumentation Engineers.
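For context, the sketch below illustrates generic destination-tag self-routing and a crude permutation-permissibility check for a banyan-class network, assuming an omega-style (perfect-shuffle) interconnection; it is a simplified stand-in, not the paper's 2-D folded optical control scheme, in which these routing bits would instead set the states of the polarizing-beamsplitter/spatial-light-modulator stages.

```python
# Illustrative sketch of destination-tag self-routing in a banyan-class network
# with N = 2**n inputs: at stage k, the 2x2 switching element forwards a signal
# according to bit k (MSB first) of the destination address.  Assumes an
# omega-style (perfect-shuffle) interconnection; NOT the paper's optical scheme.

def routing_tag(dst, n):
    """Per-stage control bits (MSB first) that steer any input to output dst."""
    return [(dst >> (n - 1 - k)) & 1 for k in range(n)]

def is_permissible(perm, n):
    """Crude blocking check for a permutation: route every source and flag any
    2x2 element asked to be 'straight' and 'exchange' in the same stage."""
    mask, demanded = (1 << n) - 1, [{} for _ in range(n)]
    for src, dst in enumerate(perm):
        line = src
        for k, bit in enumerate(routing_tag(dst, n)):
            line = ((line << 1) | (line >> (n - 1))) & mask   # perfect shuffle
            element, state = line >> 1, (line & 1) ^ bit      # 0 straight, 1 exchange
            if demanded[k].setdefault(element, state) != state:
                return False                                  # output contention
            line = (line & ~1) | bit                          # switch sets the LSB
    return True

print(is_permissible([0, 1, 2, 3, 4, 5, 6, 7], 3))   # identity is routable -> True
print(is_permissible([0, 4, 2, 6, 1, 5, 3, 7], 3))   # conflicts in stage 0 -> False
```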
Abstract:
Coherent ecological networks (EN) composed of core areas linked by ecological corridors are being developed worldwide with the goal of promoting landscape connectivity and biodiversity conservation. However, empirical assessment of the performance of EN designs is critical to evaluate the utility of these networks in mitigating the effects of habitat loss and fragmentation. Landscape genetics provides a particularly valuable framework to address the question of functional connectivity by providing a direct means to investigate the effects of landscape structure on gene flow. The goals of this study are (1) to evaluate the landscape features that drive gene flow of an EN target species (the European pine marten), and (2) to evaluate the optimality of a regional EN design in providing connectivity for this species within the Basque Country (northern Spain). Using partial Mantel tests in a reciprocal causal modeling framework, we competed 59 alternative models, including isolation by distance and the regional EN. Our analysis indicated that the regional EN was among the most supported resistance models for the pine marten, but it was not the best supported model. Gene flow of the pine marten in northern Spain is facilitated by natural vegetation and is resisted by anthropogenic landcover types and roads. Our results suggest that the regional EN design being implemented in the Basque Country will effectively facilitate gene flow of forest-dwelling species at a regional scale.
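As a hedged illustration of the statistical machinery involved, the sketch below implements a simple (non-partial) Mantel permutation test between a genetic distance matrix and a landscape-resistance distance matrix; the study itself uses partial Mantel tests within a reciprocal causal modeling framework to compete the 59 candidate models, which this sketch does not reproduce.

```python
import numpy as np

# Minimal sketch of a simple Mantel permutation test between a pairwise genetic
# distance matrix and a landscape-resistance distance matrix (illustration only;
# the study uses *partial* Mantel tests in a reciprocal causal modeling framework).

def mantel(gen_dist, res_dist, n_perm=9999, rng=None):
    rng = np.random.default_rng(rng)
    iu = np.triu_indices_from(gen_dist, k=1)          # upper-triangle entries
    x, y = gen_dist[iu], res_dist[iu]
    r_obs = np.corrcoef(x, y)[0, 1]                   # observed Mantel r
    count = 0
    for _ in range(n_perm):
        p = rng.permutation(gen_dist.shape[0])        # permute individuals/populations
        count += np.corrcoef(gen_dist[np.ix_(p, p)][iu], y)[0, 1] >= r_obs
    return r_obs, (count + 1) / (n_perm + 1)          # one-sided p-value

# Tiny synthetic check: genetic distance loosely tracks resistance distance.
rng = np.random.default_rng(3)
res = rng.random((20, 20)); res = (res + res.T) / 2; np.fill_diagonal(res, 0)
gen = 0.7 * res + 0.3 * rng.random((20, 20)); gen = (gen + gen.T) / 2; np.fill_diagonal(gen, 0)
r, p = mantel(gen, res, n_perm=999)
print(f"Mantel r = {r:.2f}, p = {p:.3f}")
```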
Abstract:
Reef fishes are conspicuous and essential components of coral reef ecosystems and economies of southern Florida and the United States Virgin Islands (USVI). Throughout Florida and the USVI, reef fish are under threat from a variety of anthropogenic and natural stressors including overfishing, habitat loss, and environmental changes. The South Florida/Caribbean Network (SFCN), a unit of the National Park Service (NPS), is charged with monitoring reef fishes, among other natural and cultural resources, within six parks in the South Florida - Caribbean region (Biscayne National Park, BISC; Buck Island Reef National Monument, BUIS; Dry Tortugas National Park, DRTO; Everglades National Park, EVER; Salt River Bay National Historic Park and Ecological Preserve, SARI; Virgin Islands National Park, VIIS). Monitoring data is intended for park managers who are and will continue to be asked to make decisions to balance environmental protection, fishery sustainability and park use by visitors. The range and complexity of the issues outlined above, and the need for NPS to invest in a strategy of monitoring, modeling, and management to ensure the sustainability of its precious assets, will require strategic investment in long-term, high-precision, multispecies reef fish data that increases inherent system knowledge and reduces uncertainty. The goal of this guide is to provide the framework for park managers and researchers to create or enhance a reef fish monitoring program within areas monitored by the SFCN. The framework is expected to be applicable to other areas as well, including the Florida Keys National Marine Sanctuary and Virgin Islands Coral Reef National Monument. The favored approach is characterized by an iterative process of data collection, dataset integration, sampling design analysis, and population and community assessment that evaluates resource risks associated with management policies. Using this model, a monitoring program can adapt its survey methods to increase accuracy and precision of survey estimates as new information becomes available, and adapt to the evolving needs and broadening responsibilities of park management.
Abstract:
This paper extends a state projection method for structure-preserving model reduction to situations where only a weaker notion of system structure is available. This weaker notion of structure, identifying the causal relationships between manifest variables of the system, is especially relevant in settings such as systems biology, where a clear partition of state variables into distinct subsystems may be unknown, or may not even exist. The resulting technique, like similar approaches, does not provide theoretical performance guarantees, so an extensive computational study is conducted, and the method is observed to work fairly well in practice. Moreover, conditions characterizing structurally minimal realizations, and sufficient conditions characterizing edge loss resulting from the reduction process, are presented. ©2009 IEEE.
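For orientation, the sketch below shows plain Petrov-Galerkin state projection of an LTI system; the paper's contribution is the additional constraint that the projection preserve the causal structure among manifest variables, which is not reproduced here.

```python
import numpy as np

# Minimal sketch of plain Petrov-Galerkin state projection of an LTI system
#     x' = A x + B u,  y = C x
# onto a reduced basis:  A_r = W.T @ A @ V, B_r = W.T @ B, C_r = C @ V,
# with W.T @ V = I.  The structure-preserving constraint of the paper is omitted.

def project(A, B, C, V, W):
    W = W @ np.linalg.inv(V.T @ W)      # enforce bi-orthogonality  W.T @ V = I
    return W.T @ A @ V, W.T @ B, C @ V  # reduced (A_r, B_r, C_r)

if __name__ == "__main__":
    n, r = 6, 2
    rng = np.random.default_rng(0)
    A = rng.standard_normal((n, n))
    B = rng.standard_normal((n, 1))
    C = rng.standard_normal((1, n))
    V = np.linalg.qr(rng.standard_normal((n, r)))[0]   # trial basis
    Ar, Br, Cr = project(A, B, C, V, V)                # Galerkin choice W = V
    print(Ar.shape, Br.shape, Cr.shape)
```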
Abstract:
How do neurons develop, control, and maintain their electrical signaling properties in spite of ongoing protein turnover and perturbations to activity? From generic assumptions about the molecular biology underlying channel expression, we derive a simple model and show how it encodes an "activity set point" in single neurons. The model generates diverse self-regulating cell types and relates correlations in conductance expression observed in vivo to underlying channel expression rates. Synaptic as well as intrinsic conductances can be regulated to make a self-assembling central pattern generator network; thus, network-level homeostasis can emerge from cell-autonomous regulation rules. Finally, we demonstrate that the outcome of homeostatic regulation depends on the complement of ion channels expressed in cells: in some cases, loss of specific ion channels can be compensated; in others, the homeostatic mechanism itself causes pathological loss of function.
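A minimal sketch of the kind of cell-autonomous regulation rule described here is given below: each maximal conductance is adjusted by an integral controller driven by the error between an activity sensor and a fixed target. The linear activity readout, rate constants, and target value are illustrative assumptions rather than the paper's actual model; the point is that steady-state conductance ratios are set by the ratios of the regulation rates, which is one way correlated conductance expression can arise.

```python
import numpy as np

# Toy sketch of cell-autonomous homeostatic regulation: each maximal conductance
# g_i is adjusted by an integral controller driven by the error between an
# activity sensor (a stand-in for intracellular [Ca2+]) and a fixed target.
# The linear readout, a_target, tau, and weights are illustrative assumptions.

def simulate(tau, w, a_target=1.0, dt=0.01, steps=20000):
    tau, w = np.asarray(tau, float), np.asarray(w, float)
    g = np.zeros_like(tau)
    hist = []
    for _ in range(steps):
        a = w @ g                          # crude activity readout
        g = g + dt * (a_target - a) / tau  # integral control of each conductance
        g = np.maximum(g, 0.0)             # conductances cannot go negative
        hist.append(g.copy())
    return np.array(hist)

hist = simulate(tau=[1.0, 3.0, 10.0], w=[0.5, 0.3, 0.2])
print("steady-state conductances:", hist[-1])
print("g_0/g_1 =", hist[-1][0] / hist[-1][1], "vs tau_1/tau_0 =", 3.0 / 1.0)
```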
Abstract:
Proceeding from the demands imposed by the functional architecture of high-speed, high-capacity optical communication networks, this paper points out that photonic integrated devices, including high-speed-response laser sources, narrow-band-response photodetectors, high-speed wavelength converters, dense wavelength multi/demultiplexers, low-loss high-speed-response photo-switches, and multi-beam couplers, are the key components in such systems. The progress of investigations in the laboratory is then introduced.
Abstract:
A new series of network liquid crystal polymers was synthesized by graft copolymerization of the difunctional mesogenic monomer 4-allyloxy-benzoyloxy-4'-allyloxybiphenyl (M) onto polymethylhydrosiloxane (PMHS). Monomer M acted not only as a mesogenic unit but also as a crosslinker for the network polymers. The chemical structures of the polymers were confirmed by IR spectroscopy. DSC, TGA, and X-ray scattering were used to measure their thermal and mesogenic properties. The glass transition temperature (T-g) of these network liquid crystal polymers increased as the monomer content increased, while T-d (the temperature of 5% weight loss) at first went up, reached a maximum at P, and then went down. The slightly crosslinked polymers (P, P) show rubber-like elasticity, so they are called liquid-crystal elastomers. Network polymers lose this elasticity at a high degree of crosslinking and turn into thermosetting polymers (P-4, P-5). All polymers exhibited a smectic texture by X-ray scattering.
Abstract:
The Transmission Control Protocol (TCP) has been the protocol of choice for many Internet applications requiring reliable connections. The design of TCP has been challenged by the extension of connections over wireless links. We ask a fundamental question: What is the basic predictive power of TCP with respect to network state, including wireless error conditions? The goal is to improve or readily exploit this predictive power to enable TCP (or its variants) to perform well in generalized network settings. To that end, we use Maximum Likelihood Ratio tests to evaluate TCP as a detector/estimator. We quantify how well network state can be estimated, given network responses such as distributions of packet delays or TCP throughput that are conditioned on the type of packet loss. Using our model-based approach and extensive simulations, we demonstrate that congestion-induced losses and losses due to wireless transmission errors produce sufficiently different statistics upon which an efficient detector can be built; that distributions of network loads can provide effective means for estimating packet loss type; and that packet delay is a better signal of network state than short-term throughput. We demonstrate how estimation accuracy is influenced by different proportions of congestion versus wireless losses and by penalties on incorrect estimation.
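As a hedged sketch of the detector/estimator viewpoint, the code below applies a log-likelihood-ratio test to delay samples using assumed Gaussian conditionals for congestion-induced versus wireless losses; the paper derives the conditional statistics from its model and simulations rather than assuming them.

```python
import numpy as np

# Minimal sketch of a likelihood-ratio detector for the cause of a packet loss,
# using the distribution of packet delays conditioned on loss type.  The Gaussian
# conditionals (means/standard deviations) are illustrative placeholders.

def gaussian_loglik(x, mu, sigma):
    return -0.5 * np.log(2 * np.pi * sigma**2) - (x - mu)**2 / (2 * sigma**2)

def classify_loss(delays, mu_cong=120.0, sd_cong=25.0,
                  mu_wless=60.0, sd_wless=10.0, threshold=0.0):
    """Return True if the delay samples around the loss look congestion-induced."""
    delays = np.asarray(delays, float)
    llr = np.sum(gaussian_loglik(delays, mu_cong, sd_cong)
                 - gaussian_loglik(delays, mu_wless, sd_wless))
    return llr > threshold          # threshold trades detection vs. false alarm

print(classify_loss([115.0, 130.0, 108.0]))   # inflated delays -> congestion (True)
print(classify_loss([58.0, 63.0, 55.0]))      # near-baseline delays -> wireless (False)
```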
Abstract:
For a given TCP flow, exogenous losses are those occurring on links other than the flow's bottleneck link. Exogenous losses are typically viewed as introducing undesirable "noise" into TCP's feedback control loop, leading to inefficient network utilization and potentially severe global unfairness. This has prompted much research on mechanisms for hiding such losses from end-points. In this paper, we show through analysis and simulations that low levels of exogenous losses are surprisingly beneficial in that they improve stability and convergence, without sacrificing efficiency. Based on this, we argue that exogenous-loss awareness should be taken into account in any AQM design that aims to achieve global fairness. To that end, we propose an exogenous-loss aware Queue Management (XQM) scheme that actively accounts for and leverages exogenous losses. We use an equation-based approach to derive the quiescent loss rate for a connection based on the connection's profile and its global fair share. In contrast to other queue management techniques, XQM ensures that a connection sees its quiescent loss rate, not only by complementing already existing exogenous losses, but also by actively hiding exogenous losses, if necessary, to achieve global fairness. We establish the advantages of exogenous-loss awareness using extensive simulations in which we contrast the performance of XQM to that of a host of traditional exogenous-loss unaware AQM techniques.
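A minimal sketch of the equation-based step is given below: it inverts the simple square-root TCP throughput model to obtain the quiescent loss rate a connection should observe at its global fair share. The specific response function is an assumption on my part; the derivation in the paper may use a more detailed model.

```python
# Minimal sketch: invert the simple "square-root" TCP throughput model,
#     T  ≈  (MSS / RTT) * sqrt(3 / (2 p)),
# to obtain the loss rate p that a connection should observe if it is to
# converge to its global fair-share rate T_fair.  A queue manager could then
# complement (or hide) exogenous losses so the connection actually sees p.
# (Assumed response function; the paper's may be more detailed.)

def quiescent_loss_rate(fair_share_bps, rtt_s, mss_bytes=1460):
    mss_bits = mss_bytes * 8
    return 1.5 * (mss_bits / (rtt_s * fair_share_bps)) ** 2

# Example: a 2 Mbit/s fair share over a 100 ms RTT path.
p = quiescent_loss_rate(2e6, 0.1)
print(f"target (quiescent) loss rate ≈ {p:.4%}")
```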
Abstract:
One of TCP's critical tasks is to determine which packets are lost in the network, as a basis for control actions (flow control and packet retransmission). Modern TCP implementations use two mechanisms: timeout and fast retransmit. Detection via timeout is necessarily a time-consuming operation; fast retransmit, while much quicker, is only effective for a small fraction of packet losses. In this paper we consider the problem of packet loss detection in TCP more generally. We concentrate on the fact that TCP's control actions are necessarily triggered by inference of packet loss, rather than by conclusive knowledge. This suggests that one might analyze TCP's packet loss detection in a standard inferencing framework based on probability of detection and probability of false alarm. This paper makes two contributions to that end. First, we study an example of more general packet loss inference, namely optimal Bayesian packet loss detection based on round trip time. We show that for long-lived flows, it is frequently possible to achieve high detection probability and low false alarm probability based on measured round trip time. Second, we construct an analytic performance model that incorporates general packet loss inference into TCP. We show that for realistic detection and false alarm probabilities (as are achievable via our Bayesian detector) and for moderate packet loss rates, the use of more general packet loss inference in TCP can improve throughput by as much as 25%.
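As a hedged sketch of the first contribution, the code below applies a Bayes decision rule to a measured round trip time using assumed class-conditional RTT densities and an assumed prior loss rate, and estimates the resulting detection and false-alarm probabilities by Monte Carlo; in practice the conditionals would be learned from the flow's RTT history.

```python
import numpy as np

# Minimal sketch of Bayesian loss inference from round-trip time.  The Gaussian
# class-conditional RTT densities and the prior loss rate are assumptions; the
# paper estimates such conditionals for long-lived flows and characterizes the
# resulting detection / false-alarm probabilities.

def posterior_loss(rtt, p_loss=0.02, mu_loss=180.0, sd_loss=30.0,
                   mu_ok=100.0, sd_ok=20.0):
    def pdf(x, mu, sd):
        return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
    num = p_loss * pdf(rtt, mu_loss, sd_loss)
    return num / (num + (1 - p_loss) * pdf(rtt, mu_ok, sd_ok))

def detect(rtt, threshold=0.5):
    return posterior_loss(rtt) > threshold   # declare the packet lost

# Monte-Carlo estimate of detection / false-alarm probabilities.
rng = np.random.default_rng(1)
rtt_when_lost = rng.normal(180.0, 30.0, 100_000)
rtt_when_ok = rng.normal(100.0, 20.0, 100_000)
print("P_D  ≈", np.mean(detect(rtt_when_lost)))
print("P_FA ≈", np.mean(detect(rtt_when_ok)))
```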
Abstract:
Recent measurements of local-area and wide-area traffic have shown that network traffic exhibits variability at a wide range of scales (self-similarity). In this paper, we examine a mechanism that gives rise to self-similar network traffic and present some of its performance implications. The mechanism we study is the transfer of files or messages whose size is drawn from a heavy-tailed distribution. We examine its effects through detailed transport-level simulations of multiple TCP streams in an internetwork. First, we show that in a "realistic" client/server network environment (i.e., one with bounded resources and coupling among traffic sources competing for resources), the degree to which file sizes are heavy-tailed can directly determine the degree of traffic self-similarity at the link level. We show that this causal relationship is not significantly affected by changes in network resources (bottleneck bandwidth and buffer capacity), network topology, the influence of cross-traffic, or the distribution of interarrival times. Second, we show that properties of the transport layer play an important role in preserving and modulating this relationship. In particular, the reliable transmission and flow control mechanisms of TCP (Reno, Tahoe, or Vegas) serve to maintain the long-range dependency structure induced by heavy-tailed file size distributions. In contrast, if a non-flow-controlled and unreliable (UDP-based) transport protocol is used, the resulting traffic shows little self-similarity: although still bursty at short time scales, it has little long-range dependence. If flow-controlled, unreliable transport is employed, the degree of traffic self-similarity is positively correlated with the degree of throttling at the source. Third, in exploring the relationship between file sizes, transport protocols, and self-similarity, we are also able to show some of the performance implications of self-similarity. We present data on the relationship between traffic self-similarity and network performance as captured by performance measures including packet loss rate, retransmission rate, and queueing delay. Increased self-similarity, as expected, results in degradation of performance. Queueing delay, in particular, exhibits a drastic increase with increasing self-similarity. Throughput-related measures such as packet loss and retransmission rate, however, increase only gradually with increasing traffic self-similarity as long as a reliable, flow-controlled transport protocol is used.
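The toy sketch below, which is not the paper's transport-level simulation, shows the basic mechanism: transfers with Pareto-distributed (heavy-tailed) sizes are superposed into a traffic time series whose Hurst parameter is then estimated with the aggregated-variance method.

```python
import numpy as np

# Toy illustration (not the paper's ns-style simulation): transfers of
# Pareto-distributed file sizes, each served at a fixed rate, are superposed
# into a bytes-per-slot time series, and the Hurst parameter of that series is
# estimated with the aggregated-variance method (slope = 2H - 2).
rng = np.random.default_rng(0)

slots, rate_per_flow = 200_000, 10.0            # time slots; bytes drained per slot
alpha, x_min = 1.2, 10.0                        # heavy tail: infinite variance for alpha < 2
starts = rng.integers(0, slots, size=20_000)
sizes = x_min * (1.0 - rng.random(20_000)) ** (-1.0 / alpha)   # Pareto(alpha) sampling

traffic = np.zeros(slots)
for s, size in zip(starts, sizes):
    dur = int(np.ceil(size / rate_per_flow))
    traffic[s:s + dur] += rate_per_flow          # flow contributes until it finishes

def hurst_aggregated_variance(x, levels=(1, 2, 4, 8, 16, 32, 64, 128)):
    logm, logv = [], []
    for m in levels:
        n = len(x) // m
        agg = x[:n * m].reshape(n, m).mean(axis=1)
        logm.append(np.log(m)); logv.append(np.log(agg.var()))
    slope = np.polyfit(logm, logv, 1)[0]
    return 1.0 + slope / 2.0                     # var(X^(m)) ~ m^(2H - 2)

print("estimated Hurst parameter ≈", round(hurst_aggregated_variance(traffic), 2))
```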
Abstract:
The development and deployment of distributed network-aware applications and services over the Internet require the ability to compile and maintain a model of the underlying network resources with respect to (one or more) characteristic properties of interest. To be manageable, such models must be compact, and must enable a representation of properties along temporal, spatial, and measurement resolution dimensions. In this paper, we propose a general framework for the construction of such metric-induced models using end-to-end measurements. We instantiate our approach using one such property, packet loss rates, and present an analytical framework for the characterization of Internet loss topologies. From the perspective of a server, the loss topology is a logical tree rooted at the server with clients at its leaves, in which edges represent lossy paths between pairs of internal network nodes. We show how end-to-end unicast packet probing techniques could be used to (1) infer a loss topology and (2) identify the loss rates of links in an existing loss topology. Correct, efficient inference of loss topology information enables new techniques for aggregate congestion control, QoS admission control, connection scheduling, and mirror site selection. We report on simulation, implementation, and Internet deployment results that show the effectiveness of our approach and its robustness, in terms of accuracy and convergence, over a wide range of network conditions.
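For the simplest loss topology, a two-leaf logical tree, the sketch below shows how shared-segment and per-client loss rates follow from joint reception statistics under conditional independence; the end-to-end unicast probing techniques in the paper must approximate such joint outcomes (e.g., with closely spaced probe pairs), and the general-topology inference is more involved than this two-leaf case.

```python
import numpy as np

# Minimal sketch of loss-rate inference on the simplest loss topology: a
# two-leaf logical tree in which probes traverse a shared segment and then
# independent per-client segments.  With conditional independence,
#     P(c1 ok) = a_s*a_1,  P(c2 ok) = a_s*a_2,  P(both ok) = a_s*a_1*a_2,
# so the shared-segment success rate is  a_s = P(c1 ok)*P(c2 ok)/P(both ok).
# (Unicast probing must approximate these joint outcomes, e.g. with probe pairs.)

def infer_two_leaf(recv1, recv2):
    recv1, recv2 = np.asarray(recv1, bool), np.asarray(recv2, bool)
    p1, p2, p12 = recv1.mean(), recv2.mean(), (recv1 & recv2).mean()
    a_shared = p1 * p2 / p12
    return {"shared": 1 - a_shared,                 # loss rate of each logical edge
            "leaf1": 1 - p1 / a_shared,
            "leaf2": 1 - p2 / a_shared}

# Synthetic check with known per-edge loss rates of 5% / 10% / 20%.
rng = np.random.default_rng(2)
n = 200_000
shared = rng.random(n) > 0.05
r1 = shared & (rng.random(n) > 0.10)
r2 = shared & (rng.random(n) > 0.20)
print(infer_two_leaf(r1, r2))
```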
Abstract:
Internet streaming applications are adversely affected by network conditions such as high packet loss rates and long delays. This paper aims at mitigating such effects by leveraging the availability of client-side caching proxies. We present a novel caching architecture (and associated cache management algorithms) that turn edge caches into accelerators of streaming media delivery. A salient feature of our caching algorithms is that they allow partial caching of streaming media objects and joint delivery of content from caches and origin servers. The caching algorithms we propose are both network-aware and stream-aware; they take into account the popularity of streaming media objects, their bit-rate requirements, and the available bandwidth between clients and servers. Using realistic models of Internet bandwidth (derived from proxy cache logs and measured over real Internet paths), we have conducted extensive simulations to evaluate the performance of various cache management alternatives. Our experiments demonstrate that network-aware caching algorithms can significantly reduce service delay and improve overall stream quality. Also, our experiments show that partial caching is particularly effective when bandwidth variability is not very high.
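As a hedged illustration of partial caching that accounts for popularity, bit rate, and available bandwidth, the sketch below sizes each cached prefix to mask the bandwidth deficit to the origin and allocates cache space greedily by popularity per cached byte; the field names and the greedy rule are illustrative assumptions, and the paper's cache-management algorithms are more elaborate (and also adapt to measured bandwidth variability).

```python
# Hedged sketch of popularity/bandwidth-aware partial caching: for each stream,
# the prefix size that masks the deficit between its bit rate and the available
# client-origin bandwidth is (bitrate - bw) * duration; prefixes are then cached
# greedily by popularity per cached byte until the cache is full.
# (Illustrative names and rule only; not the paper's actual algorithms.)

def plan_partial_cache(objects, cache_bytes):
    """objects: list of dicts with popularity (requests/day), bitrate and
    avail_bw (bytes/s), and duration (s).  Returns {name: cached prefix bytes}."""
    plan, free = {}, cache_bytes
    def need(o):                      # prefix bytes required to avoid stalls
        return max(0.0, o["bitrate"] - o["avail_bw"]) * o["duration"]
    candidates = [o for o in objects if need(o) > 0]
    candidates.sort(key=lambda o: o["popularity"] / need(o), reverse=True)
    for o in candidates:
        take = min(need(o), free)     # partial prefix if the cache is nearly full
        if take > 0:
            plan[o["name"]] = take
            free -= take
    return plan

demo = [
    {"name": "news",  "popularity": 900, "bitrate": 4e5, "avail_bw": 2e5, "duration": 120},
    {"name": "movie", "popularity": 150, "bitrate": 6e5, "avail_bw": 3e5, "duration": 5400},
]
print(plan_partial_cache(demo, cache_bytes=5e8))
```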
Abstract:
End-to-end differentiation between wireless and congestion loss can equip TCP control so it operates effectively in a hybrid wired/wireless environment. Our approach integrates two techniques: packet loss pairs (PLP) and Hidden Markov Modeling (HMM). A packet loss pair is formed by two back-to-back packets, where one packet is lost while the second packet is successfully received. The purpose is for the second packet to carry the state of the network path, namely the round trip time (RTT), at the time the other packet is lost. Under realistic conditions, PLP provides strong differentiation between congestion and wireless types of loss based on distinguishable RTT distributions. An HMM is then trained so that observed RTTs can be mapped to model states that represent either congestion loss or wireless loss. Extensive simulations confirm the accuracy of our HMM-based technique in classifying the cause of a packet loss. We also show the superiority of our technique over the Vegas predictor, which was recently found to perform best and which exemplifies other existing loss labeling techniques.
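A minimal sketch of the classification step is given below: RTTs observed at loss instants are decoded with a two-state Gaussian HMM whose states stand for congestion loss (inflated RTT) and wireless loss (near-baseline RTT). The emission and transition parameters here are assumed for illustration; in the paper the HMM is trained on observed RTT sequences from packet loss pairs.

```python
import numpy as np

# Minimal sketch of the classification step: RTTs observed at loss instants
# (via packet loss pairs) are decoded with a 2-state hidden Markov model whose
# states stand for "congestion loss" (inflated RTT) and "wireless loss"
# (near-baseline RTT).  The Gaussian emission and transition parameters below
# are assumed for illustration; the paper trains them from observed RTTs.

MEANS, STDS = np.array([150.0, 80.0]), np.array([25.0, 15.0])   # [congestion, wireless]
TRANS = np.array([[0.8, 0.2],
                  [0.3, 0.7]])                                   # state persistence
PRIOR = np.array([0.5, 0.5])

def log_emission(rtt):
    return -0.5 * np.log(2 * np.pi * STDS**2) - (rtt - MEANS)**2 / (2 * STDS**2)

def viterbi(rtts):
    """Most likely loss-type label (0 = congestion, 1 = wireless) per loss."""
    delta = np.log(PRIOR) + log_emission(rtts[0])
    back = []
    for rtt in rtts[1:]:
        scores = delta[:, None] + np.log(TRANS)      # scores[i, j]: state i -> j
        back.append(np.argmax(scores, axis=0))
        delta = scores.max(axis=0) + log_emission(rtt)
    path = [int(np.argmax(delta))]
    for bp in reversed(back):
        path.append(int(bp[path[-1]]))
    return path[::-1]

print(viterbi(np.array([155.0, 160.0, 85.0, 78.0, 148.0])))   # expected: [0, 0, 1, 1, 0]
```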