980 results for link capacity
Abstract:
We consider the problem of efficiently and fairly allocating bandwidth at a highly congested link to a diverse set of flows, including TCP flows with various Round Trip Times (RTTs), non-TCP-friendly flows such as Constant-Bit-Rate (CBR) applications over UDP, and misbehaving or malicious flows. Simple FIFO queue management is vulnerable to such flows. Fair Queueing (FQ) can guarantee max-min fairness, but at a high cost in efficiency. RED-PD exploits the history of RED's actions to preferentially drop packets from higher-rate flows, thereby attempting to achieve fairness at low cost. Precisely because it relies on RED's actions, however, RED-PD turns out to be ineffective against non-adaptive flows in settings with a highly heterogeneous mix of flows. In this paper, we propose a new approach we call RED-NB (RED with No Bias). RED-NB does not rely on RED's actions; rather, it explicitly maintains its own history for the few high-rate flows and adaptively adjusts their dropping probabilities to achieve max-min fairness. In addition, RED-NB helps RED itself at very high loads by tuning RED's dropping behavior to flow characteristics (restricted in this paper to RTTs), eliminating RED's bias against long-RTT TCP flows while still taking advantage of RED's features at low loads. Through extensive simulations, we confirm the fairness of RED-NB and show that it outperforms RED, RED-PD, and CHOKe in all scenarios.
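As a rough illustration of the mechanism sketched in this abstract, the following Python snippet maintains a per-flow drop probability for monitored high-rate flows and nudges it toward a max-min fair share. The class name, update rule, and parameters are hypothetical assumptions for illustration, not the paper's actual RED-NB algorithm.

```python
import random

class PreferentialDropper:
    """Toy per-flow preferential dropping in the spirit of the abstract:
    keep history for the few high-rate flows and adapt each flow's drop
    probability toward a max-min fair share. Names and the update rule
    are illustrative assumptions, not the paper's algorithm."""

    def __init__(self, fair_share_bps, step=0.01):
        self.fair_share = fair_share_bps  # target max-min fair rate
        self.step = step                  # adaptation gain
        self.drop_prob = {}               # per-flow drop probability

    def update(self, flow_id, measured_rate_bps):
        # Raise the drop probability of flows above the fair share;
        # let it decay for flows at or below it.
        p = self.drop_prob.get(flow_id, 0.0)
        excess = measured_rate_bps / self.fair_share - 1.0
        if excess > 0:
            p = min(1.0, p + self.step * excess)
        else:
            p = max(0.0, p - self.step)
        self.drop_prob[flow_id] = p

    def admit(self, flow_id):
        # False means the arriving packet is preferentially dropped.
        return random.random() >= self.drop_prob.get(flow_id, 0.0)
```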
Abstract:
Recent measurements of local-area and wide-area traffic have shown that network traffic exhibits variability at a wide range of time scales, i.e., self-similarity. In this paper, we examine a mechanism that gives rise to self-similar network traffic and present some of its performance implications. The mechanism we study is the transfer of files or messages whose sizes are drawn from a heavy-tailed distribution. We examine its effects through detailed transport-level simulations of multiple TCP streams in an internetwork. First, we show that in a "realistic" client/server network environment (i.e., one with bounded resources and coupling among traffic sources competing for resources), the degree to which file sizes are heavy-tailed can directly determine the degree of traffic self-similarity at the link level. We show that this causal relationship is not significantly affected by changes in network resources (bottleneck bandwidth and buffer capacity), network topology, the influence of cross-traffic, or the distribution of interarrival times. Second, we show that properties of the transport layer play an important role in preserving and modulating this relationship. In particular, the reliable transmission and flow control mechanisms of TCP (Reno, Tahoe, or Vegas) serve to maintain the long-range dependence structure induced by heavy-tailed file size distributions. In contrast, if a non-flow-controlled and unreliable (UDP-based) transport protocol is used, the resulting traffic shows little self-similarity: although still bursty at short time scales, it has little long-range dependence. If flow-controlled, unreliable transport is employed, the degree of traffic self-similarity is positively correlated with the degree of throttling at the source. Third, in exploring the relationship between file sizes, transport protocols, and self-similarity, we are also able to show some of the performance implications of self-similarity. We present data on the relationship between traffic self-similarity and network performance as captured by measures including packet loss rate, retransmission rate, and queueing delay. Increased self-similarity, as expected, results in degraded performance. Queueing delay, in particular, increases drastically with increasing self-similarity, whereas throughput-related measures such as packet loss and retransmission rate increase only gradually, as long as a reliable, flow-controlled transport protocol is used.
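The core mechanism, heavy-tailed transfer sizes producing long-range dependence, is easy to reproduce in miniature. The Python sketch below (a toy ON/OFF source, not the paper's transport-level simulation setup) draws ON and OFF period lengths from a Pareto distribution and estimates the Hurst parameter of the resulting slot-count series with the aggregated-variance method; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def pareto(alpha, x_m=1.0):
    # Inverse-CDF sample of Pareto(alpha, x_m); alpha < 2 means infinite
    # variance, i.e. a heavy tail.
    return x_m * rng.random() ** (-1.0 / alpha)

def onoff_traffic(n_slots, alpha):
    # Toy ON/OFF source: heavy-tailed ON (transfer) and OFF (idle)
    # periods, one unit of traffic per ON slot. Smaller alpha (heavier
    # tail) yields stronger long-range dependence in the count series.
    traffic = np.zeros(n_slots)
    t, on = 0, True
    while t < n_slots:
        d = int(pareto(alpha)) + 1
        if on:
            traffic[t:t + d] = 1.0
        t, on = t + d, not on
    return traffic

def hurst(x, levels=(1, 2, 4, 8, 16, 32, 64, 128)):
    # Aggregated-variance estimator: Var(X^(m)) ~ m^(2H - 2), so the
    # slope of log-variance vs. log-m gives H = 1 + slope / 2.
    variances = [x[: len(x) // m * m].reshape(-1, m).mean(axis=1).var()
                 for m in levels]
    slope = np.polyfit(np.log(levels), np.log(variances), 1)[0]
    return 1.0 + slope / 2.0

print(hurst(onoff_traffic(1 << 18, alpha=1.2)))  # well above 0.5
print(hurst(onoff_traffic(1 << 18, alpha=2.8)))  # much closer to 0.5
```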
Abstract:
We consider the problem of delivering popular streaming media to a large number of asynchronous clients. We propose and evaluate a cache-and-relay end-system multicast approach, whereby a client joining a multicast session caches the stream and, if needed, relays that stream to neighboring clients that may join the multicast session at some later time. This cache-and-relay approach is fully distributed, scalable, and efficient in terms of network link cost. In this paper we analytically derive bounds on the network link cost of our cache-and-relay approach, and we evaluate its performance under assumptions of limited client bandwidth and limited client cache capacity. When client bandwidth is limited, we show that although finding an optimal solution is NP-hard, a simple greedy algorithm performs surprisingly well, incurring network link costs that are very close to a theoretical lower bound. When client cache capacity is limited, we show that our cache-and-relay approach can still significantly reduce network link cost. We have evaluated our cache-and-relay approach using simulations over large synthetic random networks, power-law degree networks, and small-world networks, as well as over large real router-level Internet maps.
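A minimal sketch of the greedy idea mentioned above: each arriving client attaches to the cheapest feasible earlier client (one whose cached prefix still covers the playback gap and who has relay fan-out left), falling back to the server otherwise. The function, its parameters, and the cost model are hypothetical stand-ins, not the paper's formulation.

```python
def greedy_attach(arrivals, cache_window, fanout, link_cost):
    """Greedy cache-and-relay sketch: client i attaches to the earlier
    client j minimizing link_cost(j, i), provided j's cached prefix
    still covers the arrival gap and j has relay capacity left;
    otherwise i streams from the server. All names are hypothetical."""
    SERVER = -1
    parents, children, total = {}, {}, 0.0
    for i, t in enumerate(arrivals):       # arrivals assumed sorted
        best, best_cost = SERVER, link_cost(SERVER, i)
        for j in range(i):
            feasible = (t - arrivals[j] <= cache_window
                        and children.get(j, 0) < fanout)
            if feasible and link_cost(j, i) < best_cost:
                best, best_cost = j, link_cost(j, i)
        parents[i] = best
        children[best] = children.get(best, 0) + 1
        total += best_cost
    return parents, total

# Toy usage: clients on a line with the server at position 0.
pos = [5, 6, 9, 14]
cost = lambda a, b: abs((0 if a == -1 else pos[a]) - pos[b])
print(greedy_attach([0, 2, 3, 10], cache_window=5, fanout=2, link_cost=cost))
# ({0: -1, 1: 0, 2: 1, 3: -1}, 23.0)
```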
Abstract:
The advent of virtualization and cloud computing technologies necessitates the development of effective mechanisms for the estimation and reservation of resources needed by content providers to deliver large numbers of video-on-demand (VOD) streams through the cloud. Unfortunately, capacity planning for the QoS-constrained delivery of a large number of VOD streams is inherently difficult, as VBR encoding schemes exhibit significant bandwidth variability. In this paper, we present a novel resource management scheme to make such allocation decisions using a mixture of per-stream reservations and an aggregate reservation, shared across all streams to accommodate peak demands. The shared reservation provides capacity slack that enables statistical multiplexing of peak rates, while assuring analytically bounded frame-drop probabilities, which can be adjusted by trading off buffer space (and consequently delay) and bandwidth. Our two-tiered bandwidth allocation scheme enables the delivery of any set of streams with less bandwidth (or equivalently, at higher link utilization) than state-of-the-art deterministic smoothing approaches. The algorithm underlying our proposed framework uses three per-stream parameters and is linear in the number of servers, making it particularly well suited for use in an on-line setting. We present results from extensive trace-driven simulations, which confirm the efficiency of our scheme, especially for small buffer sizes and delay bounds, and which underscore the significant realizable bandwidth savings, typically yielding losses an order of magnitude or more below our analytically derived bounds.
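A toy version of the two-tiered idea, assuming per-stream reservations set at a quantile of each VBR bandwidth trace and a shared slack sized from the empirical overflow distribution; the quantile choice and epsilon are assumptions, not the paper's three-parameter algorithm.

```python
import numpy as np

def two_tier_reservation(traces, per_stream_q=0.90, eps=1e-3):
    """Reserve a per-stream rate (a quantile of each VBR bandwidth
    trace) plus one shared slack, sized so the empirical probability
    that aggregate demand exceeds the total reservation stays below
    eps. Quantile and eps values are illustrative assumptions."""
    traces = [np.asarray(t, dtype=float) for t in traces]
    per_stream = np.array([np.quantile(t, per_stream_q) for t in traces])
    n = min(len(t) for t in traces)
    aggregate = sum(t[:n] for t in traces)     # instantaneous total demand
    overflow = aggregate - per_stream.sum()    # demand above per-stream pool
    shared = max(float(np.quantile(overflow, 1.0 - eps)), 0.0)
    return per_stream, shared

# Toy VBR traces: base rate plus bursty peaks (illustrative data only).
rng = np.random.default_rng(7)
traces = [1.0 + rng.pareto(3.0, 10_000) for _ in range(50)]
per_stream, shared = two_tier_reservation(traces)
print(per_stream.sum() + shared, "vs peak-rate sum", sum(t.max() for t in traces))
```

The comparison in the last line shows the point of statistical multiplexing: the two-tier total is typically far below the sum of deterministic peak rates.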
Abstract:
We propose Trade & Cap (T&C), an economics-inspired mechanism that incentivizes users to voluntarily coordinate their consumption of the bandwidth of a shared resource (e.g., a DSLAM link) so as to converge on what they perceive to be an equitable allocation, while ensuring efficient resource utilization. Under T&C, rather than acting as an arbiter, an Internet Service Provider (ISP) acts as an enforcer of what the community of rational users sharing the resource decides is a fair allocation of that resource. Our T&C mechanism proceeds in two phases. In the first, software agents acting on behalf of users engage in a strategic trading game in which each user agent selfishly chooses bandwidth slots to reserve in support of primary, interactive network usage activities. In the second phase, each user is allowed to acquire additional bandwidth slots in support of a presumed open-ended need for fluid bandwidth, catering to secondary applications. The acquisition of this fluid bandwidth is subject to the remaining "buying power" of each user and to prevailing "market prices", both of which are determined by the results of the trading phase and by a desirable aggregate cap on link utilization. We present analytical results that establish the underpinnings of our T&C mechanism, including game-theoretic results pertaining to the trading phase and pricing of fluid bandwidth allocation pertaining to the capping phase. Using real network traces, we present extensive experimental results that demonstrate the benefits of our scheme, which we also show to be practical by highlighting the salient features of an efficient implementation architecture.
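As a sketch of the capping phase only, the snippet below shares a slot's leftover capacity in proportion to each user's remaining buying power; this proportional rule is a simplifying assumption standing in for the market-price mechanism described in the abstract, not its actual pricing rules.

```python
def fluid_allocation(buying_power, slot_capacity):
    """Split the slot's leftover capacity in proportion to each user's
    remaining buying power from the trading phase. A simplifying
    stand-in for the abstract's market-price mechanism."""
    total = sum(buying_power.values())
    if total == 0.0:
        return {user: 0.0 for user in buying_power}
    return {user: slot_capacity * bp / total
            for user, bp in buying_power.items()}

# Toy usage: alice kept more buying power, so she gets the bigger share.
print(fluid_allocation({"alice": 3.0, "bob": 1.0}, slot_capacity=8.0))
# {'alice': 6.0, 'bob': 2.0}
```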
Abstract:
In this work, we conducted extensive active measurements on a large nationwide CDMA2000 1xRTT network in order to characterize the impact of both the Radio Link Protocol and, more importantly, the wireless scheduler on TCP. Our measurements include standard TCP/UDP logs, as well as detailed RF-layer statistics that allow observability into RF dynamics. With the help of a robust correlation measure, normalized mutual information, we were able to quantify the impact of these two RF factors on TCP performance metrics such as round trip time, packet loss rate, and instantaneous throughput. We show that the variable channel rate has a larger impact on TCP behavior than the Radio Link Protocol. Furthermore, we expose and rank the factors that influence the assigned channel rate itself and, in particular, demonstrate the sensitivity of the wireless scheduler to the data sending rate. Thus, TCP adapts its rate to match the available network capacity, while the rate allocated by the wireless scheduler is influenced by the sender's behavior. Such a system is best described as a closed-loop system with two feedback controllers, the TCP controller and the wireless scheduler, each one affecting the other's decisions. In this work, we take the first steps in characterizing such a system in a realistic environment.
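Normalized mutual information, the correlation measure named above, can be computed from a 2-D histogram of two metric time series. The sketch below uses plain NumPy; the bin count, the normalization by the geometric mean of the entropies, and the toy data are assumptions for illustration.

```python
import numpy as np

def nmi(x, y, bins=16):
    """Normalized mutual information between two metric series, from a
    2-D histogram; the bin count and normalization (geometric mean of
    the marginal entropies) are conventional choices, assumed here."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    mi = np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz]))
    hx = -np.sum(px[px > 0] * np.log(px[px > 0]))
    hy = -np.sum(py[py > 0] * np.log(py[py > 0]))
    return mi / np.sqrt(hx * hy)

# Toy data: throughput roughly inversely related to RTT, plus noise.
rng = np.random.default_rng(1)
rtt = rng.gamma(2.0, 50.0, 5000)
throughput = 1e6 / rtt + rng.normal(0.0, 500.0, 5000)
print(nmi(throughput, rtt))  # high for dependent series, near 0 if independent
```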
Abstract:
The last 30 years have seen Fuzzy Logic (FL) emerge as a method that either complements or challenges stochastic methods as the traditional way of modelling uncertainty. But the circumstances under which FL or stochastic methods should be used remain the subject of disagreement, because the areas of application of statistical and FL methods overlap and opinions differ as to when each method should be used. What is lacking are practically relevant case studies comparing the two. This work compares stochastic and FL methods for the assessment of spare capacity, using pharmaceutical high purity water (HPW) utility systems as the example. The goal of this study was to find the most appropriate method for modelling uncertainty in industrial-scale HPW systems. The results provide evidence that stochastic methods are superior to FL methods for simulating uncertainty in chemical plant utilities, including HPW systems, in the typical case where extreme events (for example, peaks in demand) or day-to-day variation, rather than average values, are of interest. The average production output or other statistical measures may, for instance, be of interest in the assessment of workshops. Furthermore, the results indicate that a stochastic model should be used only if a deterministic simulation shows it to be necessary. Consequently, this thesis concludes that either deterministic or stochastic methods should be used to simulate uncertainty in chemical plant utility systems, and by extension some process systems, because extreme events or the modelling of day-to-day variation are important in capacity extension projects. Other reasons supporting the preference for stochastic over FL HPW models include:
1. The computer code for stochastic models is typically less complex than that for FL models, reducing code maintenance and validation issues.
2. In many respects FL models are similar to deterministic models, so the need for an FL model over a deterministic model is questionable for industrial-scale HPW systems as presented here (as well as other similar systems), since the deterministic model is the simpler of the two.
3. An FL model may be difficult to "sell" to an end-user, as its results represent "approximate reasoning", a definition of which is, however, lacking.
4. Stochastic models may be applied with relatively minor modifications to other systems, whereas FL models may not. For instance, the stochastic HPW model could be used to model municipal drinking water systems, whereas the FL HPW model could not, because the FL and stochastic model philosophies of an HPW system are fundamentally different: the stochastic model treats schedule and volume uncertainties as random phenomena described by statistical distributions based on estimated or historical data, while the FL model simulates schedule uncertainties based on estimated operator behaviour (e.g., operator tiredness and working schedules). In a municipal drinking water distribution system the notion of "operator" breaks down.
5. Stochastic methods can account for uncertainties that are difficult to model with FL. The FL HPW system model does not account for dispensed-volume uncertainty, as there appears to be no reasonable way to capture it with FL, whereas the stochastic model includes volume uncertainty.
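A minimal Monte Carlo sketch of the stochastic approach the thesis favours: dispensing times and volumes are sampled from assumed distributions (uniform hours, normal volumes), and the quantity of interest is the tail of the hourly peak demand rather than its mean. Distributions, parameters, and units are hypothetical, not the thesis's calibrated model.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_peak_demand(n_days=10_000, n_draws=30, mean_vol=0.5, vol_sd=0.1):
    # Each day: n_draws dispensing events at random hours (schedule
    # uncertainty) with normally distributed volumes (volume
    # uncertainty); record that day's peak hourly demand.
    peaks = np.empty(n_days)
    for day in range(n_days):
        hours = rng.integers(0, 24, n_draws)
        volumes = rng.normal(mean_vol, vol_sd, n_draws).clip(min=0.0)
        peaks[day] = np.bincount(hours, weights=volumes, minlength=24).max()
    return peaks

peaks = simulate_peak_demand()
# Capacity decisions hinge on the tail (extreme events), not the mean.
print(f"mean peak: {peaks.mean():.2f}   99.9th pct: {np.quantile(peaks, 0.999):.2f}")
```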
Abstract:
The global proportion of older persons is increasing rapidly. Diet and the intestinal microbiota independently and jointly contribute to health in the elderly. The habitual dietary patterns and functional microbiota components of elderly subjects were investigated in order to identify specific effector mechanisms. A study of the dietary intake of Irish community-dwelling elderly subjects showed that the consumption of foods high in fat and/or sugar was excessive, while consumption of dairy foods was inadequate. Elderly females typically had a more nutrient-dense diet than males and a considerable proportion of subjects, particularly males, had inadequate intakes of calcium, magnesium, vitamin D, folate, zinc and vitamin C. The association between dietary patterns, glycaemic index and cognitive function was also investigated. Elderly subjects consuming ‘prudent’ dietary patterns had better cognitive function compared to those consuming ‘Western’ dietary patterns. Furthermore, fully-adjusted regression models revealed that a high glycaemic diet was associated with poor cognitive function, demonstrating a new link between nutrition and cognition. An extensive screening study of the elderly faecal-derived microbiota was also undertaken to examine the prevalence of antimicrobial production by intestinal bacteria. A number of previously characterised bacteriocins were isolated (gassericin T, ABP-118, mutacin II, enterocin L-50 and enterocin P) in this study. Interestingly, a Lactobacillus crispatus strain was found to produce a potentially novel antimicrobial compound. Full genome sequencing of this strain revealed the presence of three loci which exhibited varying degrees of homology with the genes responsible for helveticin J production in Lb. helveticus. An additional study comparing the immunomodulatory capacity of ‘viable’ and ‘non-viable’ Bifidobacterium strains found that Bifidobacterium-fermented milks (BFMs) containing ‘non-viable’ cells could stimulate levels of IL-10 and TNF-α in a manner similar to those stimulated by BFMs containing ‘viable’ cells in vitro.
Abstract:
While numerous studies find that deep-saline sandstone aquifers in the United States could store many decades' worth of the nation's current annual CO2 emissions, the likely cost of this storage (i.e., the cost of storage only, not capture and transport costs) has been harder to constrain. We use publicly available data on key reservoir properties to produce geo-referenced rasters of estimated storage capacity and cost for regions within 15 deep-saline sandstone aquifers in the United States. The rasters reveal the reservoir quality of these aquifers to be so variable that the cost estimates for storage span three orders of magnitude and average >$100/tonne CO2. However, when the cost and corresponding capacity estimates in the rasters are assembled into a marginal abatement cost curve (MACC), we find that ~75% of the estimated storage capacity could be available for <$2/tonne. Furthermore, ~80% of the total estimated storage capacity in the rasters is concentrated within just two of the aquifers: the Frio Formation along the Texas Gulf Coast and the Mt. Simon Formation in the Michigan Basin, which together make up only ~20% of the areas analyzed. While our assessment is not comprehensive, the results suggest there should be an abundance of low-cost storage for CO2 in deep-saline aquifers, but the majority of this storage is likely to be concentrated within specific regions of a small number of these aquifers.
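Assembling a MACC from rastered (cost, capacity) estimates is a sort-and-accumulate operation; the small sketch below uses made-up numbers purely to show the mechanics of reading off capacity available below a given cost.

```python
import numpy as np

def macc(costs, capacities):
    # Sort raster cells by $/tonne and accumulate capacity, so one can
    # read off how much storage is available below any given cost.
    order = np.argsort(costs)
    return np.asarray(costs)[order], np.cumsum(np.asarray(capacities)[order])

# Illustrative cells only: cost in $/tonne CO2, capacity in Mt.
cost, cum_capacity = macc([1.5, 120.0, 0.8, 40.0], [300, 5, 500, 20])
print(cum_capacity[cost < 2.0][-1], "Mt available below $2/tonne")  # 800 Mt
```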
Abstract:
Adult humans, infants, pre-school children, and non-human animals appear to share a system of approximate numerical processing for non-symbolic stimuli such as arrays of dots or sequences of tones. Behavioral studies of adult humans implicate a link between these non-symbolic numerical abilities and symbolic numerical processing (e.g., similar distance effects in accuracy and reaction-time for arrays of dots and Arabic numerals). However, neuroimaging studies have remained inconclusive on the neural basis of this link. The intraparietal sulcus (IPS) is known to respond selectively to symbolic numerical stimuli such as Arabic numerals. Recent studies, however, have arrived at conflicting conclusions regarding the role of the IPS in processing non-symbolic numerosity arrays in adulthood, and very little is known about the brain basis of numerical processing early in development. Addressing the question of whether there is an early-developing neural basis for abstract numerical processing is essential for understanding the cognitive origins of our uniquely human capacity for math and science. Using functional magnetic resonance imaging (fMRI) at 4 Tesla and an event-related fMRI adaptation paradigm, we found that adults showed a greater IPS response to visual arrays that deviated from standard stimuli in their number of elements than to stimuli that deviated in local element shape. These results support previous claims that there is a neurophysiological link between non-symbolic and symbolic numerical processing in adulthood. In parallel, we tested 4-y-old children with the same fMRI adaptation paradigm as adults to determine whether the neural locus of non-symbolic numerical activity in adults shows continuity in function over development. We found that the IPS responded to numerical deviants similarly in 4-y-old children and adults. To our knowledge, this is the first evidence that the neural locus of adult numerical cognition takes form early in development, prior to sophisticated symbolic numerical experience. More broadly, this is also, to our knowledge, the first cognitive fMRI study to test healthy children as young as 4 y, providing new insights into the neurophysiology of human cognitive development.
Abstract:
Tissue-engineered skeletal muscle can serve as a physiological model of natural muscle and a potential therapeutic vehicle for rapid repair of severe muscle loss and injury. Here, we describe a platform for engineering and testing highly functional biomimetic muscle tissues with a resident satellite cell niche and capacity for robust myogenesis and self-regeneration in vitro. Using a mouse dorsal window implantation model and transduction with fluorescent intracellular calcium indicator, GCaMP3, we nondestructively monitored, in real time, vascular integration and the functional state of engineered muscle in vivo. During a 2-wk period, implanted engineered muscle exhibited a steady ingrowth of blood-perfused microvasculature along with an increase in amplitude of calcium transients and force of contraction. We also demonstrated superior structural organization, vascularization, and contractile function of fully differentiated vs. undifferentiated engineered muscle implants. The described in vitro and in vivo models of biomimetic engineered muscle represent enabling technology for novel studies of skeletal muscle function and regeneration.
Abstract:
It is commonly accepted that aerobic exercise increases hippocampal neurogenesis, learning and memory, as well as stress resiliency. However, human populations are widely variable in their inherent aerobic fitness as well as their capacity to show increased aerobic fitness following a period of regimented exercise. It is unclear whether these inherent or acquired components of aerobic fitness play a role in neurocognition. To isolate the potential role of inherent aerobic fitness, we exploited a rat model of high (HCR) and low (LCR) inherent aerobic capacity for running. At a baseline, HCR rats have two- to three-fold higher aerobic capacity than LCR rats. We found that HCR rats also had two- to three-fold more young neurons in the hippocampus than LCR rats as well as rats from the heterogeneous founder population. We then asked whether this enhanced neurogenesis translates to enhanced hippocampal cognition, as is typically seen in exercise-trained animals. Compared to LCR rats, HCR rats performed with high accuracy on tasks designed to test neurogenesis-dependent pattern separation ability by examining investigatory behavior between very similar objects or locations. To investigate whether an aerobic response to exercise is required for exercise-induced changes in neurogenesis and cognition, we utilized a rat model of high (HRT) and low (LRT) aerobic response to treadmill training. At a baseline, HRT and LRT rats have comparable aerobic capacity as measured by a standard treadmill fit test, yet after a standardized training regimen, HRT but not LRT rats robustly increase their aerobic capacity for running. We found that sedentary LRT and HRT rats had equivalent levels of hippocampal neurogenesis, but only HRT rats had an elevation in the number of young neurons in the hippocampus following training, which was positively correlated with accuracy on pattern separation tasks. Taken together, these data suggest that a significant elevation in aerobic capacity is necessary for exercise-induced hippocampal neurogenesis and hippocampal neurogenesis-dependent learning and memory. To investigate the potential for high aerobic capacity to be neuroprotective, doxorubicin chemotherapy was administered to LCR and HCR rats. While doxorubicin induces a progressive decrease in aerobic capacity as well as neurogenesis, HCR rats remain at higher levels on those measures compared to even saline-treated LCR rats. HCR and LCR rats that received exercise training throughout doxorubicin treatment demonstrated positive effects of exercise on aerobic capacity and neurogenesis, regardless of inherent aerobic capacity. Overall, these findings demonstrate that inherent and acquired components of aerobic fitness play a crucial role not only in the cardiorespiratory system but also the fitness of the brain.
Abstract:
Immune responses are highly energy-dependent processes. Activated T cells increase glucose uptake and aerobic glycolysis to survive and function. Malnutrition and starvation limit nutrients and are associated with immune deficiency and increased susceptibility to infection. Although it is clear that immunity is suppressed in times of nutrient stress, mechanisms that link systemic nutrition to T cell function are poorly understood. We show in this study that fasting leads to persistent defects in T cell activation and metabolism, as T cells from fasted animals had low glucose uptake and decreased ability to produce inflammatory cytokines, even when stimulated in nutrient-rich media. To explore the mechanism of this long-lasting T cell metabolic defect, we examined leptin, an adipokine reduced in fasting that regulates systemic metabolism and promotes effector T cell function. We show that leptin is essential for activated T cells to upregulate glucose uptake and metabolism. This effect was cell intrinsic and specific to activated effector T cells, as naive T cells and regulatory T cells did not require leptin for metabolic regulation. Importantly, either leptin addition to cultured T cells from fasted animals or leptin injections to fasting animals was sufficient to rescue both T cell metabolic and functional defects. Leptin-mediated metabolic regulation was critical, as transgenic expression of the glucose transporter Glut1 rescued cytokine production of T cells from fasted mice. Together, these data demonstrate that induction of T cell metabolism upon activation is dependent on systemic nutritional status, and leptin links adipocytes to metabolically license activated T cells in states of nutritional sufficiency.