8 results for "random loss" in Boston University Digital Common
Abstract:
Background: Chronic illness and premature mortality from malaria, water-borne diseases, and respiratory illnesses have long been known to diminish the welfare of individuals and households in developing countries. Previous research has also shown that chronic diseases among farming populations suppress labor productivity and agricultural output. As the illness and death toll from HIV/AIDS continues to climb in most of sub-Saharan Africa, concern has arisen that the loss of household labor it causes will reduce crop yields, impoverish farming households, intensify malnutrition, and suppress growth in the agricultural sector. If chronic morbidity and premature mortality among individuals in farming households have substantial impacts on household production, and if a large number of households are affected, it is possible that an increase in morbidity and mortality from HIV/AIDS or other diseases could affect national aggregate output and exports. If, on the other hand, the impact at the household farm level is modest, or if relatively few households are affected, there is likely to be little effect on aggregate production across an entire country. Which of these outcomes is more likely in West Africa is unknown. Little rigorous, quantitative research has been published on the impacts of AIDS on smallholder farm production, particularly in West Africa. The handful of studies that have been conducted have looked mainly at small populations in areas of very high HIV prevalence in southern and eastern Africa. Conclusions about how HIV/AIDS, and other causes of chronic morbidity and mortality, are affecting agriculture across the continent cannot be drawn from these studies. In view of the importance of agriculture, and particularly smallholder agriculture, in the economies of most African countries and the scarcity of resources for health interventions, it is valuable to identify, describe, and quantify the impact of chronic morbidity and mortality on smallholder production of important crops in West Africa. One such crop is cocoa. In Ghana, cocoa is a crop of national importance that is produced almost exclusively by smallholder households. In 2003, Ghana was the world’s second-largest producer of cocoa. Cocoa accounted for a quarter of Ghana’s export revenues that year and generated 15 percent of employment. The success and growth of the cocoa industry is thus vital to the country’s overall social and economic development.

Study Objectives and Methods: In February and March 2005, the Center for International Health and Development of Boston University (CIHD) and the Department of Agricultural Economics and Agribusiness (DAEA) of the University of Ghana, with financial support from the Africa Bureau of the U.S. Agency for International Development and from Mars, Inc., which is a major purchaser of West African cocoa, conducted a survey of a random sample of cocoa farming households in the Western Region of Ghana. The survey documented the extent of chronic morbidity and mortality in cocoa growing households in the Western Region, the country’s largest cocoa growing region, and analyzed the impact of morbidity and mortality on cocoa production. It aimed to answer three specific research questions. (1) What is the baseline status of the study population in terms of household size and composition, acute and chronic morbidity, recent mortality, and cocoa production?
(2) What is the relationship between household size and cocoa production, and how can this relationship be used to understand the impact of adult mortality and chronic morbidity on the production of cocoa at the household level? The study population was the approximately 42,000 cocoa farming households in the southern part of Ghana’s Western Region. A random sample of households was selected from a roster of eligible households developed from existing administrative information. Under the supervision of the University of Ghana field team, enumerators were graduate students of the Department of Agricultural Economics and Agribusiness or employees of the Cocoa Services Division. A total of 632 eligible farmers participated in the survey. Of these, 610 provided complete responses to all questions needed to complete the multivariate statistical analysis reported here.
Abstract:
For a given TCP flow, exogenous losses are those occurring on links other than the flow's bottleneck link. Exogenous losses are typically viewed as introducing undesirable "noise" into TCP's feedback control loop, leading to inefficient network utilization and potentially severe global unfairness. This has prompted much research on mechanisms for hiding such losses from end-points. In this paper, we show through analysis and simulations that low levels of exogenous losses are surprisingly beneficial in that they improve stability and convergence without sacrificing efficiency. Based on this, we argue that exogenous loss awareness should be taken into account in any AQM design that aims to achieve global fairness. To that end, we propose an exogenous-loss aware Queue Management (XQM) that actively accounts for and leverages exogenous losses. We use an equation-based approach to derive the quiescent loss rate for a connection based on the connection's profile and its global fair share. In contrast to other queue management techniques, XQM ensures that a connection sees its quiescent loss rate, not only by complementing already existing exogenous losses, but also by actively hiding exogenous losses, if necessary, to achieve global fairness. We establish the advantages of exogenous-loss awareness using extensive simulations in which we contrast the performance of XQM to that of a host of traditional exogenous-loss unaware AQM techniques.
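The abstract does not spell out which throughput model the equation-based step uses, so the sketch below assumes the standard square-root TCP throughput formula T ≈ MSS / (RTT · sqrt(2p/3)) and simply inverts it: given a connection's RTT (its profile) and its global fair share, solve for the loss rate p at which the flow would settle at that share. The function names and parameters are illustrative, not taken from the paper.

```python
def quiescent_loss_rate(fair_share_bps, rtt_s, mss_bytes=1460):
    """Loss rate p at which a TCP flow's steady-state throughput equals its
    global fair share, using the square-root model T = MSS / (RTT * sqrt(2p/3)).
    Inverting for p gives p = 1.5 * (MSS / (RTT * T))**2."""
    mss_bits = mss_bytes * 8
    p = 1.5 * (mss_bits / (rtt_s * fair_share_bps)) ** 2
    return min(p, 1.0)

def loss_adjustment(fair_share_bps, rtt_s, exogenous_loss_rate):
    """Extra loss the queue would impose so the connection sees its quiescent
    loss rate; a negative value means exogenous losses already exceed the
    target and some of them would have to be hidden to restore fairness."""
    return quiescent_loss_rate(fair_share_bps, rtt_s) - exogenous_loss_rate

# Example: 2 Mb/s fair share, 80 ms RTT, 0.2% measured exogenous loss.
print(quiescent_loss_rate(2e6, 0.080))       # ~0.008: target loss rate
print(loss_adjustment(2e6, 0.080, 0.002))    # ~0.006: loss the queue adds
```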
Abstract:
One of TCP's critical tasks is to determine which packets are lost in the network, as a basis for control actions (flow control and packet retransmission). Modern TCP implementations use two mechanisms: timeout and fast retransmit. Detection via timeout is necessarily a time-consuming operation; fast retransmit, while much quicker, is only effective for a small fraction of packet losses. In this paper we consider the problem of packet loss detection in TCP more generally. We concentrate on the fact that TCP's control actions are necessarily triggered by inference of packet loss, rather than conclusive knowledge. This suggests that one might analyze TCP's packet loss detection in a standard inference framework based on probability of detection and probability of false alarm. This paper makes two contributions to that end. First, we study an example of more general packet loss inference, namely optimal Bayesian packet loss detection based on round trip time. We show that for long-lived flows, it is frequently possible to achieve high detection probability and low false alarm probability based on measured round trip time. Second, we construct an analytic performance model that incorporates general packet loss inference into TCP. We show that for realistic detection and false alarm probabilities (as are achievable via our Bayesian detector) and for moderate packet loss rates, the use of more general packet loss inference in TCP can improve throughput by as much as 25%.
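The paper's detector is described above only at the level of Bayesian inference over measured round trip times, so the following is a minimal sketch of such a detector under assumed Gaussian RTT models: it computes the posterior probability of loss from the RTT likelihoods and the prior loss rate, and declares a loss when the posterior exceeds one half. The distributions, parameters, and names are illustrative assumptions, not the paper's.

```python
from scipy.stats import norm  # Gaussian RTT models are an illustrative assumption

def bayesian_loss_detector(rtt, p_loss,
                           mu_ok=0.050, sigma_ok=0.005,
                           mu_loss=0.080, sigma_loss=0.010):
    """Declare a loss when the posterior P(loss | rtt) exceeds 1/2, where
    P(loss | rtt) = p_loss * f_loss(rtt) /
                    (p_loss * f_loss(rtt) + (1 - p_loss) * f_ok(rtt))
    and f_loss, f_ok are RTT densities conditioned on whether the packet
    was lost (modeled here as Gaussians with made-up parameters)."""
    f_loss = norm.pdf(rtt, mu_loss, sigma_loss)
    f_ok = norm.pdf(rtt, mu_ok, sigma_ok)
    posterior = p_loss * f_loss / (p_loss * f_loss + (1 - p_loss) * f_ok)
    return posterior > 0.5, posterior

detected, post = bayesian_loss_detector(rtt=0.072, p_loss=0.02)
print(detected, round(float(post), 3))  # an elevated RTT tips the posterior toward loss
```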
Abstract:
Recent work in sensor databases has focused extensively on distributed query problems, notably distributed computation of aggregates. Existing methods for computing aggregates broadcast queries to all sensors and use in-network aggregation of responses to minimize messaging costs. In this work, we focus on uniform random sampling across nodes, which can serve both as an alternative building block for aggregation and as an integral component of many other useful randomized algorithms. Prior to our work, the best existing proposals for uniform random sampling of sensors involve contacting all nodes in the network. We propose a practical method which is only approximately uniform, but contacts a number of sensors proportional to the diameter of the network instead of its size. The approximation achieved is tunably close to exact uniform sampling, and the method relies only on well-known existing primitives, namely geographic routing, distributed computation of Voronoi regions, and von Neumann's rejection method. Ultimately, our sampling algorithm has the same worst-case asymptotic cost as routing a point-to-point message, and thus it is asymptotically optimal among request/reply-based sampling methods. We provide experimental results demonstrating the effectiveness of our algorithm on both synthetic and real sensor topologies.
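A centralized sketch of the sampling idea described above (not the distributed protocol itself): draw a uniform random point in the deployment area, find the sensor whose Voronoi cell contains it, which is the role geographic routing plays in the paper, then apply von Neumann rejection with acceptance probability inversely proportional to that cell's area so that sensors with large cells are not over-sampled. The per-sensor cell areas are assumed to be known in advance, standing in for the paper's distributed Voronoi computation.

```python
import random

def nearest(sensors, x, y):
    """Sensor whose Voronoi cell contains (x, y), i.e. the nearest sensor;
    in the distributed setting this lookup is what geographic routing does."""
    return min(sensors, key=lambda s: (s["x"] - x) ** 2 + (s["y"] - y) ** 2)

def sample_sensor(sensors, area_w, area_h, a_min):
    """Approximately uniform sensor sampling via von Neumann rejection.
    A random point lands in sensor s's cell with probability proportional to
    that cell's area; accepting with probability a_min / cell_area (a_min is
    a lower bound on cell areas, so the ratio is <= 1) makes every sensor
    equally likely per accepted draw."""
    while True:
        x, y = random.uniform(0, area_w), random.uniform(0, area_h)
        s = nearest(sensors, x, y)
        if random.random() < a_min / s["cell_area"]:
            return s

# Toy topology; in the paper the cell areas would come from the distributed
# Voronoi computation rather than being supplied by hand.
sensors = [
    {"id": 0, "x": 1.0, "y": 1.0, "cell_area": 6.0},
    {"id": 1, "x": 4.0, "y": 2.0, "cell_area": 9.0},
    {"id": 2, "x": 2.5, "y": 4.0, "cell_area": 10.0},
]
print(sample_sensor(sensors, area_w=5.0, area_h=5.0, a_min=6.0)["id"])
```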
Abstract:
End-to-end differentiation between wireless and congestion loss can equip TCP control so it operates effectively in a hybrid wired/wireless environment. Our approach integrates two techniques: packet loss pairs (PLP) and Hidden Markov Modeling (HMM). A packet loss pair is formed by two back-to-back packets, where one packet is lost while the second packet is successfully received. The purpose is for the second packet to carry the state of the network path, namely the round trip time (RTT), at the time the other packet is lost. Under realistic conditions, PLP provides strong differentiation between congestion and wireless types of loss based on distinguishable RTT distributions. An HMM is then trained so observed RTTs can be mapped to model states that represent either congestion loss or wireless loss. Extensive simulations confirm the accuracy of our HMM-based technique in classifying the cause of a packet loss. We also show the superiority of our technique over the Vegas predictor, which was recently found to perform best and which exemplifies other existing loss labeling techniques.
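A minimal sketch of the training and classification step described above, assuming a two-state Gaussian-emission HMM over loss-pair RTTs and using the hmmlearn library purely as an illustration (the paper does not prescribe a particular implementation): the fitted state with the larger mean RTT is read as congestion loss, the other as wireless loss.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM  # library choice is mine, not the paper's

def train_plp_hmm(loss_pair_rtts):
    """Fit a two-state HMM to the RTTs carried by packet loss pairs.  The
    state with the larger mean RTT is interpreted as congestion loss
    (queues full), the other as wireless loss."""
    X = np.asarray(loss_pair_rtts, dtype=float).reshape(-1, 1)
    hmm = GaussianHMM(n_components=2, covariance_type="diag", n_iter=200)
    hmm.fit(X)
    congestion_state = int(np.argmax(hmm.means_.ravel()))
    return hmm, congestion_state

def classify_losses(hmm, congestion_state, loss_pair_rtts):
    """Label each loss-pair RTT as 'congestion' or 'wireless' via the HMM's
    most likely state sequence."""
    X = np.asarray(loss_pair_rtts, dtype=float).reshape(-1, 1)
    states = hmm.predict(X)
    return ["congestion" if s == congestion_state else "wireless" for s in states]
```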
Abstract:
The current congestion-oriented design of TCP hinders its ability to perform well in hybrid wireless/wired networks. We propose an improvement to TCP NewReno (NewReno-FF) using a new loss labeling technique to discriminate wireless from congestion losses. The proposed technique is based on the estimation of the average and variance of the round trip time using a filter called the Flip Flop filter that is augmented with history information. We show the comparative performance of TCP NewReno, NewReno-FF, and TCP Westwood through extensive simulations. We study the fundamental gains and limits using TCP NewReno with varying loss labeling accuracy (NewReno-LL) as a benchmark. Lastly, our investigation opens up important research directions. First, there is a need for a finer-grained classification of losses (even within congestion and wireless losses) for TCP in heterogeneous networks. Second, it is essential to develop an appropriate control strategy for recovery after the correct classification of a packet loss.
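The abstract does not give the filter's equations; the sketch below follows the common flip-flop construction of two EWMAs, an agile one and a stable one, with the stable estimate's running mean and variance serving as the "history information" that decides which filter's output to trust. The gains and control-band width are illustrative choices, not the paper's.

```python
class FlipFlopRTTFilter:
    """Flip-flop RTT estimator: an agile EWMA tracks the signal quickly, a
    stable EWMA smooths heavily, and running mean/variance (the 'history')
    decide which output to trust."""

    def __init__(self, agile_gain=0.5, stable_gain=0.05, k=3.0):
        self.agile_gain, self.stable_gain, self.k = agile_gain, stable_gain, k
        self.mean = None   # stable EWMA of RTT
        self.var = 0.0     # EWMA of squared deviation from the stable mean
        self.agile = None  # agile EWMA of RTT

    def update(self, rtt):
        if self.mean is None:                    # first sample seeds both filters
            self.mean = self.agile = rtt
            return rtt
        err = rtt - self.mean
        self.mean += self.stable_gain * err
        self.var += self.stable_gain * (err * err - self.var)
        self.agile += self.agile_gain * (rtt - self.agile)
        band = self.k * (self.var ** 0.5)
        # flip to the agile estimate inside the control band, flop to the
        # stable estimate when the sample looks like an outlier
        return self.agile if abs(err) <= band else self.mean
```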
Abstract:
A secure sketch (defined by Dodis et al.) is an algorithm that on an input w produces an output s such that w can be reconstructed given its noisy version w' and s. Security is defined in terms of two parameters m and m̃: if w comes from a distribution of entropy m, then a secure sketch guarantees that the distribution of w conditioned on s has entropy m̃, where λ = m − m̃ is called the entropy loss. In this note we show that the entropy loss of any secure sketch (or, more generally, any randomized algorithm) on any distribution is no more than it is on the uniform distribution.
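Using average min-entropy in the style of Dodis et al., the claim can plausibly be written as the bound below: the entropy loss of the sketch on any input distribution W over {0,1}^n is at most its entropy loss on the uniform distribution U_n. This is a reading of the abstract, not a verbatim theorem statement.

```latex
% A plausible formalization, with H_\infty the min-entropy and
% \tilde{H}_\infty the average (conditional) min-entropy of Dodis et al.:
% for any distribution W over \{0,1\}^n and any sketch SS,
\[
  \underbrace{H_\infty(W) - \tilde{H}_\infty\bigl(W \mid SS(W)\bigr)}_{\lambda \text{ on } W}
  \;\le\;
  \underbrace{n - \tilde{H}_\infty\bigl(U_n \mid SS(U_n)\bigr)}_{\lambda \text{ on the uniform } U_n}
\]
```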
Abstract:
Current Internet transport protocols make end-to-end measurements and maintain per-connection state to regulate the use of shared network resources. When a number of such connections share a common endpoint, that endpoint has the opportunity to correlate these end-to-end measurements to better diagnose and control the use of shared resources. A valuable characterization of such shared resources is the "loss topology". From the perspective of a server with concurrent connections to multiple clients, the loss topology is a logical tree rooted at the server in which edges represent lossy paths between a pair of internal network nodes. We develop an end-to-end unicast packet probing technique and an associated analytical framework to: (1) infer loss topologies, (2) identify loss rates of links in an existing loss topology, and (3) augment a topology to incorporate the arrival of a new connection. Correct, efficient inference of loss topology information enables new techniques for aggregate congestion control, QoS admission control, connection scheduling and mirror site selection. Our extensive simulation results demonstrate that our approach is robust in terms of its accuracy and convergence over a wide range of network conditions.
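The abstract leaves the estimator unspecified, so here is a toy, centralized illustration in the spirit of loss-tomography approaches (not necessarily the paper's method): clients whose probe losses are strongly correlated are assumed to share a lossy internal link, and repeatedly merging the most correlated pair of groups yields a logical tree rooted at the server.

```python
import numpy as np
from itertools import combinations

def infer_loss_tree(loss_obs):
    """loss_obs maps each client to a 0/1 array over probe rounds
    (1 = the probe toward that client was lost; each series is assumed to
    have non-zero variance).  Repeatedly merge the two groups whose loss
    indications are most correlated; each merge stands in for a shared lossy
    link, and the nested result plays the role of the logical loss tree
    rooted at the server."""
    groups = {name: (name, np.asarray(obs, dtype=float))
              for name, obs in loss_obs.items()}
    while len(groups) > 1:
        a, b = max(combinations(groups, 2),
                   key=lambda p: np.corrcoef(groups[p[0]][1], groups[p[1]][1])[0, 1])
        tree_a, obs_a = groups.pop(a)
        tree_b, obs_b = groups.pop(b)
        shared = np.minimum(obs_a, obs_b)  # a loss on the parent link hits both children
        groups[f"({a},{b})"] = ((tree_a, tree_b), shared)
    return next(iter(groups.values()))[0]
```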