995 results for ACIAR Jon Allwright Fellowship


Relevance: 10.00%

Abstract:

BACKGROUND: Zambia was the first African country to change national antimalarial treatment policy to an artemisinin-based combination therapy, artemether-lumefantrine. An evaluation during the early implementation phase revealed low readiness of health facilities and health workers to deliver artemether-lumefantrine, and worryingly suboptimal treatment practices. Improvements in the case-management of uncomplicated malaria two years after the initial evaluation and three years after the change of policy in Zambia are reported.

METHODS: Data collected during the health facility surveys undertaken in 2004 and 2006 at all outpatient departments of government and mission facilities in four Zambian districts were analysed. The surveys were cross-sectional, using a range of quality-of-care assessment methods. The main outcome measures were changes in health facility and health worker readiness to deliver artemether-lumefantrine, and changes in case-management practices for children below five years of age presenting with uncomplicated malaria as defined by national guidelines.

RESULTS: In 2004, 94 health facilities, 103 health workers and 944 consultations for children with uncomplicated malaria were evaluated. In 2006, 104 facilities, 135 health workers and 1125 consultations were evaluated using the same selection criteria. Health facility and health worker readiness improved from 2004 to 2006: availability of artemether-lumefantrine from 51% (48/94) to 60% (62/104), presence of artemether-lumefantrine dosage wall charts from 20% (19/94) to 75% (78/104), possession of guidelines from 58% (60/103) to 92% (124/135), and provision of in-service training from 25% (26/103) to 41% (55/135). The proportions of children with uncomplicated malaria treated with artemether-lumefantrine also increased from 2004 to 2006: from 1% (6/527) to 27% (149/552) in children weighing 5 to 9 kg, and from 11% (42/394) to 42% (231/547) in children weighing 10 kg or more. Across both weight groups and both years, 22% (441/2020) of children with uncomplicated malaria were not prescribed any antimalarial drug.

CONCLUSION: Although significant improvements in malaria case-management have occurred over two years in Zambia, the quality of treatment provided at the point of care is not yet optimal. Strengthening weak health systems and improving the delivery of effective interventions should remain a high priority in all countries implementing new treatment policies for malaria.

Relevance: 10.00%

Abstract:

This study investigates the meanings and significance of the seventh-day Sabbath for worship in the Seventh-day Adventist Church. In recent years, both the day and concept of Sabbath have attracted ecumenical attention, but the focus of scholarship has been placed on Sunday as the Lord's Day or Sabbath, with little consideration given to the seventh-day Sabbath. In contrast, this project examines the seventh-day Sabbath and worship on that day from theological, liturgical, biblical and historical perspectives. Although not intended as an apology for Seventh-day Adventist practices, the work does strive to promote a critical and creative conversation with other theological and liturgical traditions in order to promote mutual, ecumenical understanding. Historical research into the origins and nature of the principal day for weekly Christian worship provides a starting point for discussion of the Sabbath. Reconsideration of the relationship between Judaism and early Christianity in recent studies suggests that the influence of Judaism lasted longer than previously supposed, thereby prolonging the developmental process from Sabbath (the seventh day) to Sunday. A possible coexistence of Sabbath and Sunday in early Christianity offers an alternative to perspectives that dichotomize Sabbath and Sunday from Christian antiquity onward, and thus suggests biblical and historical validity for the Sabbath worship practice of the Seventh-day Adventist Church. Recent theological perspectives on Sabbath and Sunday are examined, particularly those of Karl Barth, Jürgen Moltmann and Pope John Paul II. While all three of these theologians stress the continuity of Sabbath and Sunday and speak mainly to a theology of Sunday, they do highlight the significance of Sabbath, which is relevant to an interpretation of seventh-day Sabbath worship. The study concludes that the seventh-day Sabbath is significant for worship in the Seventh-day Adventist Church because it symbolizes the relationship between God and human beings, reminds humanity of the creating and redeeming God who acts in history, and invites persons to rest and fellowship with God on a day sanctified by God.

Relevance: 10.00%

Abstract:

Sound propagation in shallow water is characterized by interaction with the ocean's surface, volume, and bottom. In many coastal margin regions, including the Eastern U.S. continental shelf and the coastal seas of China, the bottom is composed of a depositional sandy-silty top layer. Previous measurements of narrowband and broadband sound transmission at frequencies from 100 Hz to 1 kHz in these regions are consistent with waveguide calculations based on depth- and frequency-dependent sound speed, attenuation and density profiles. Theoretical predictions for the frequency dependence of attenuation vary from quadratic for the porous media model of M.A. Biot to linear for various competing models. Results from experiments performed under known conditions with sandy bottoms, however, have agreed with attenuation proportional to f^1.84, which is slightly less than the theoretical value of f^2 [Zhou and Zhang, J. Acoust. Soc. Am. 117, 2494]. This dissertation presents a reexamination of the fundamental considerations in the Biot derivation and leads to a simplification of the theory that can be coupled with site-specific, depth-dependent attenuation and sound speed profiles to explain the observed frequency dependence. Long-range sound transmission measurements in a known waveguide can be used to estimate the site-specific sediment attenuation properties, but the costs and time associated with such at-sea experiments using traditional measurement techniques can be prohibitive. Here a new measurement tool consisting of an autonomous underwater vehicle and a small, low-noise, towed hydrophone array was developed and used to obtain accurate long-range sound transmission measurements efficiently and cost-effectively. To demonstrate this capability and to determine the modal and intrinsic attenuation characteristics, experiments were conducted in a carefully surveyed area in Nantucket Sound. A best-fit comparison between measured and calculated results, varying the attenuation parameters, yielded an estimated power-law exponent of 1.87 between 220.5 and 1228 Hz. These results demonstrate the utility of this new cost-effective and accurate measurement system. The sound transmission results, when compared with calculations based on the modified Biot theory, are shown to explain the observed frequency dependence.
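
A minimal sketch of the exponent extraction, assuming hypothetical attenuation estimates alpha(f) at a few frequencies: a power law alpha = a * f^n is a straight line in log-log space, so the exponent n is the slope of a least-squares fit. The values below are invented for illustration (chosen to be consistent with an exponent near 1.87), not data from the dissertation.

```python
import numpy as np

# Hypothetical attenuation estimates alpha(f) (e.g., dB/m) at several
# frequencies (Hz); real values would come from the transmission data.
freqs = np.array([220.5, 400.0, 630.0, 800.0, 1228.0])
alphas = np.array([0.021, 0.064, 0.15, 0.23, 0.52])

# log(alpha) = log(a) + n * log(f): the power-law exponent n is the
# slope of a least-squares line through the log-log points.
n, log_a = np.polyfit(np.log(freqs), np.log(alphas), 1)
print(f"estimated power-law exponent n = {n:.2f}")  # ~1.87 here
```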

Relevance: 10.00%

Abstract:

We investigate the problem of learning disjunctions of counting functions, which generalize parity and modulo functions, with equivalence and membership queries. We prove that, for any prime number p, the class of disjunctions of integer-weighted counting functions with modulus p over the domain Z_q^n (or Z^n) for any given integer q ≥ 2 is polynomial-time learnable using at most n + 1 equivalence queries, where the hypotheses issued by the learner are disjunctions of at most n counting functions with weights from Z_p. The result is obtained through learning linear systems over an arbitrary field. In general, a counting function may have a composite modulus. We prove that, for any given integer q ≥ 2, over the domain Z_2^n, the class of read-once disjunctions of Boolean-weighted counting functions with modulus q is polynomial-time learnable with only one equivalence query, and the class of disjunctions of log log n Boolean-weighted counting functions with modulus q is polynomial-time learnable. Finally, we present an algorithm for learning graph-based counting functions.
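
The workhorse here is solving linear systems over the field Z_p. Below is a minimal, self-contained sketch of Gaussian elimination modulo a prime; the function name and interface are invented for illustration and are not from the paper.

```python
def solve_mod_p(A, b, p):
    """Solve A x = b over Z_p (p prime); return one solution or None."""
    A = [row[:] for row in A]
    b = b[:]
    n_rows, n_cols = len(A), len(A[0])
    pivot_cols, r = [], 0
    for c in range(n_cols):
        # Find a row at or below r with a nonzero entry in column c.
        piv = next((i for i in range(r, n_rows) if A[i][c] % p), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        b[r], b[piv] = b[piv], b[r]
        inv = pow(A[r][c], p - 2, p)          # inverse exists: p is prime
        A[r] = [(a * inv) % p for a in A[r]]
        b[r] = (b[r] * inv) % p
        for i in range(n_rows):               # clear column c elsewhere
            if i != r and A[i][c] % p:
                f = A[i][c]
                A[i] = [(a - f * ar) % p for a, ar in zip(A[i], A[r])]
                b[i] = (b[i] - f * b[r]) % p
        pivot_cols.append(c)
        r += 1
    if any(b[i] % p for i in range(r, n_rows)):
        return None                           # inconsistent system
    x = [0] * n_cols                          # free variables set to 0
    for row, c in enumerate(pivot_cols):
        x[c] = b[row]
    return x

print(solve_mod_p([[1, 2], [3, 4]], [5, 6], 7))  # -> [3, 1]
```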

Relevance: 10.00%

Abstract:

We investigate the efficient learnability of unions of k rectangles in the discrete plane {1, ..., n}^2 with equivalence and membership queries. We exhibit a learning algorithm that learns any union of k rectangles with O(k^3 log n) queries, while the time complexity of this algorithm is bounded by O(k^5 log n). We design our learning algorithm by finding "corners" and "edges" for rectangles contained in the target concept and then constructing the target concept from those "corners" and "edges". Our result provides a first approach to on-line learning of nontrivial subclasses of unions of intersections of halfspaces with equivalence and membership queries.
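
One concrete building block of edge-finding: with a membership oracle, a rectangle boundary along a fixed row can be located by binary search in O(log n) queries. The sketch below is a simplification, assuming a hypothetical oracle member(x, y) and that the probed row's cross-section of the target is a single interval (in general, handling unions of k rectangles takes more care).

```python
def find_left_edge(member, x, y, lo=1):
    """Return the smallest x' <= x with member(x', y) True, assuming
    member(x, y) is True and the row-y cross-section is an interval."""
    hi = x
    while lo < hi:
        mid = (lo + hi) // 2
        if member(mid, y):
            hi = mid            # edge is at mid or further left
        else:
            lo = mid + 1        # edge lies strictly to the right of mid
    return hi

# Example with a single rectangle [10, 40] x [5, 20] as the target:
inside = lambda px, py: 10 <= px <= 40 and 5 <= py <= 20
print(find_left_edge(inside, x=25, y=12))  # -> 10
```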

Relevance: 10.00%

Abstract:

A well-known paradigm for load balancing in distributed systems is the "power of two choices," whereby an item is stored at the less loaded of two (or more) random alternative servers. We investigate the power of two choices in natural settings for distributed computing where items and servers reside in a geometric space and each item is associated with the server that is its nearest neighbor. This is in fact the backdrop for distributed hash tables such as Chord, where the geometric space is determined by clockwise distance on a one-dimensional ring. Theoretically, we consider the following load balancing problem. Suppose that servers are initially hashed uniformly at random to points in the space. Sequentially, each item then considers d candidate insertion points, also chosen uniformly at random from the space, and selects the insertion point whose associated server has the least load. For the one-dimensional ring, and for Euclidean distance on the two-dimensional torus, we demonstrate that when n data items are hashed to n servers, the maximum load at any server is log log n / log d + O(1) with high probability. While our results match the well-known bounds in the standard setting in which each server is selected equiprobably, our applications do not have this feature, since the sizes of the nearest-neighbor regions around servers are non-uniform. Therefore, the novelty in our methods lies in developing appropriate tail bounds on the distribution of nearest-neighbor region sizes and in adapting previous arguments to this more general setting. In addition, we provide simulation results demonstrating the load balance that results as the system size scales into the millions.
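
The one-dimensional setting is easy to simulate. The sketch below (parameters invented for illustration) hashes n servers to a unit ring, assigns each point to its clockwise successor in the Chord style, inserts n items using d random probes each, and reports the maximum load next to the log log n / log d term from the bound.

```python
import bisect
import math
import random

def max_load(n_servers, n_items, d, seed=0):
    rng = random.Random(seed)
    servers = sorted(rng.random() for _ in range(n_servers))
    loads = [0] * n_servers

    def owner(point):
        # Chord-style assignment: first server clockwise from `point`.
        return bisect.bisect_left(servers, point) % n_servers

    for _ in range(n_items):
        # Each item draws d uniform candidate points and joins the
        # least-loaded owning server among them.
        best = min((owner(rng.random()) for _ in range(d)),
                   key=lambda i: loads[i])
        loads[best] += 1
    return max(loads)

n = 10_000
for d in (2, 3, 4):
    print(f"d={d}: max load {max_load(n, n, d)}, "
          f"log log n / log d = {math.log(math.log(n)) / math.log(d):.2f}")
```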

Relevance: 10.00%

Abstract:

Formal correctness of complex multi-party network protocols can be difficult to verify. While models of specific fixed compositions of agents can be checked against design constraints, protocols which lend themselves to arbitrarily many compositions of agents, such as the chaining of proxies or the peering of routers, are more difficult to verify because they represent potentially infinite state spaces and may exhibit emergent behaviors which may not materialize under particular fixed compositions. We address this challenge by developing an algebraic approach that enables us to reduce arbitrary compositions of network agents into a behaviorally equivalent (with respect to some correctness property) compact, canonical representation, which is amenable to mechanical verification. Our approach consists of an algebra and a set of property-preserving rewrite rules for the Canonical Homomorphic Abstraction of Infinite Network protocol compositions (CHAIN). Using CHAIN, an expression over our algebra (i.e., a set of configurations of network protocol agents) can be reduced to another behaviorally equivalent expression (i.e., a smaller set of configurations). Repeated applications of such rewrite rules produce a canonical expression which can be checked mechanically. We demonstrate our approach by characterizing deadlock-prone configurations of HTTP agents, as well as establishing useful properties of an overlay protocol for scheduling MPEG frames, and of a protocol for Web intra-cache consistency.
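
The flavor of the approach can be conveyed with a toy rewriting loop: keep applying length-reducing, behavior-preserving rules until a fixed point is reached, then hand the (now finite and small) canonical form to a model checker. The rules and agent names below are invented stand-ins, not the paper's actual CHAIN rules.

```python
# Hypothetical rules: each maps an adjacent pair of agents to a shorter,
# behaviorally equivalent sequence (with respect to some fixed property).
RULES = {
    ("proxy", "proxy"): ("proxy",),   # two chained proxies act like one
    ("cache", "cache"): ("cache",),
}

def canonicalize(chain):
    """Apply rewrite rules until none fires; terminates because every
    rule strictly shortens the chain."""
    chain = list(chain)
    changed = True
    while changed:
        changed = False
        for i in range(len(chain) - 1):
            pair = (chain[i], chain[i + 1])
            if pair in RULES:
                chain[i:i + 2] = RULES[pair]
                changed = True
                break
    return tuple(chain)

print(canonicalize(("client", "proxy", "proxy", "proxy", "server")))
# -> ('client', 'proxy', 'server')
```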

Relevance: 10.00%

Abstract:

We examine the question of whether to employ the first-come-first-served (FCFS) discipline or the processor-sharing (PS) discipline at the hosts in a distributed server system. We are interested in the case in which service times are drawn from a heavy-tailed distribution, and so have very high variability. Traditional wisdom when task sizes are highly variable would prefer the PS discipline, because it allows small tasks to avoid being delayed behind large tasks in a queue. However, we show that system performance can actually be significantly better under FCFS queueing, if each task is assigned to a host based on the task's size. By task assignment, we mean an algorithm that inspects incoming tasks and assigns them to hosts for service. The particular task assignment policy we propose is called SITA-E: Size Interval Task Assignment with Equal Load. Surprisingly, under SITA-E, FCFS queueing typically outperforms the PS discipline by a factor of about two, as measured by mean waiting time and mean slowdown (the waiting time of a task divided by its service time). We compare the FCFS/SITA-E policy to the processor-sharing case analytically; in addition, we compare it to a number of other policies in simulation. We show that the benefits of SITA-E are present even in small-scale distributed systems (four or more hosts). Furthermore, SITA-E is a static policy that does not incorporate feedback knowledge of the state of the hosts, which allows for a simple and scalable implementation.
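
To make "size intervals with equal load" concrete, here is a hedged sketch of computing SITA-E cutoffs for a bounded Pareto task-size distribution, a standard heavy-tailed model in this line of work. The closed form below is elementary calculus (the partial first moment of the density), valid for alpha != 1; the parameter values are illustrative, not the paper's.

```python
def equal_load_cutoffs(alpha, k, p, hosts):
    """Size cutoffs splitting a bounded Pareto(alpha, k, p) workload
    into `hosts` intervals of equal expected work."""
    assert alpha != 1.0
    c = 1.0 - alpha
    # Work in [k, x] is proportional to x**c - k**c (integral of x*f(x)).
    total = p**c - k**c
    cutoffs = []
    for i in range(1, hosts):
        target = k**c + total * i / hosts   # equalize work per interval
        cutoffs.append(target ** (1.0 / c))
    return cutoffs

# Three hosts, sizes from 512 to 1e10, tail index 1.1: note how small
# the first cutoff is, since most tasks are small.
print(equal_load_cutoffs(alpha=1.1, k=512.0, p=1e10, hosts=3))
```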

Relevance: 10.00%

Abstract:

We consider the problem of task assignment in a distributed system (such as a distributed Web server) in which task sizes are drawn from a heavy-tailed distribution. Many task assignment algorithms are based on the heuristic that balancing the load at the server hosts will result in optimal performance. We show that this conventional wisdom is less true when the task size distribution is heavy-tailed (as is the case for Web file sizes). We introduce a new task assignment policy, called Size Interval Task Assignment with Variable Load (SITA-V). SITA-V purposely operates the server hosts at different loads, and directs smaller tasks to the lighter-loaded hosts. The result is that SITA-V provably decreases the mean task slowdown by significant factors (up to 1000 or more); the more heavy-tailed the workload, the greater the improvement factor. We evaluate the tradeoff between the improvement in slowdown and the increase in waiting time in a system using SITA-V, and show conditions under which SITA-V represents a particularly appealing policy. We conclude with a discussion of the use of SITA-V in a distributed Web server, and show that it is attractive because it has a simple implementation which requires no communication from the server hosts back to the task router.
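
A rough back-of-envelope illustration (not the paper's analysis) of why unbalancing helps: for an M/G/1 FCFS queue, the Pollaczek-Khinchine formula gives E[W] = lam * E[S^2] / (2 * (1 - rho)), and since an arriving task's wait is independent of its own size, mean slowdown factors as E[W] * E[1/S]. Running the host that serves the many small tasks at low load keeps their slowdown small, while the rare large tasks tolerate a heavily loaded host. All numbers below are invented.

```python
def mean_slowdown(lam, ES, ES2, EinvS):
    """Mean slowdown E[W/S] = E[W] * E[1/S] for an M/G/1 FCFS queue."""
    rho = lam * ES
    assert rho < 1.0, "queue must be stable"
    EW = lam * ES2 / (2.0 * (1.0 - rho))    # Pollaczek-Khinchine
    return EW * EinvS

# Host for small tasks kept at rho = 0.5; host for large tasks at 0.9.
print(mean_slowdown(lam=0.5, ES=1.0, ES2=2.0, EinvS=2.0))       # ~2
print(mean_slowdown(lam=0.009, ES=100.0, ES2=3e4, EinvS=0.02))  # ~27
```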

Relevance: 10.00%

Abstract:

The CIL compiler for core Standard ML compiles whole programs using a novel typed intermediate language (TIL) with intersection and union types and flow labels on both terms and types. The CIL term representation duplicates portions of the program where intersection types are introduced and union types are eliminated. This duplication makes it easier to represent type information and to introduce customized data representations. However, duplication incurs compile-time space costs that are potentially much greater than those incurred in TILs employing type-level abstraction or quantification. In this paper, we present empirical data on the compile-time space costs of using CIL as an intermediate language. The data shows that these costs can be made tractable by using sufficiently fine-grained flow analyses together with standard hash-consing techniques. The data also suggests that non-duplicating formulations of intersection (and union) types would not achieve significantly better space complexity.
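
For readers unfamiliar with hash-consing, the standard technique mentioned above, the idea is that structurally equal terms are interned once and shared thereafter, so duplicated type information costs one representation rather than many. A tiny, invented illustration:

```python
class TypeTable:
    """Hash-consing table: structurally equal type nodes are stored
    once and all clients share the same object."""
    def __init__(self):
        self._table = {}

    def make(self, *node):
        # node is e.g. ('int',) or ('arrow', t1, t2) where t1 and t2
        # are previously interned nodes, so tuple equality is structural.
        return self._table.setdefault(node, node)

types = TypeTable()
i = types.make('int')
f1 = types.make('arrow', i, i)
f2 = types.make('arrow', types.make('int'), types.make('int'))
print(f1 is f2)   # True: duplicates share one representation
```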

Relevance: 10.00%

Abstract:

The increased diversity of Internet application requirements has spurred recent interest in transport protocols with flexible transmission controls. In window-based congestion control schemes, increase rules determine how to probe available bandwidth, whereas decrease rules determine how to back off when losses due to congestion are detected. The parameterization of these control rules is done so as to ensure that the resulting protocol is TCP-friendly in terms of the relationship between throughput and loss rate. In this paper, we define a new spectrum of window-based congestion control algorithms that are TCP-friendly as well as TCP-compatible under RED. Contrary to previous memoryless controls, our algorithms utilize history information in their control rules. Our proposed algorithms have two salient features: (1) they enable a wider region of TCP-friendliness, and thus more flexibility in trading off among smoothness, aggressiveness, and responsiveness; and (2) they ensure faster convergence to fairness under a wide range of system conditions. We demonstrate analytically and through extensive ns simulations the steady-state and transient behaviors of several instances of this new spectrum of algorithms. In particular, SIMD is one instance in which the congestion window is increased super-linearly with time since the detection of the last loss. Compared to recently proposed TCP-friendly AIMD and binomial algorithms, we demonstrate the superiority of SIMD in: (1) adapting to sudden increases in available bandwidth, while maintaining competitive smoothness and responsiveness; and (2) rapidly converging to fairness and efficiency.
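
As a schematic of such a history-aware rule, the sketch below grows the window quadratically in the RTTs elapsed since the last loss (one super-linear choice) and backs off multiplicatively on loss. The constants and the exact functional form are invented for illustration; they are not the paper's tuned SIMD parameters.

```python
ALPHA, BETA = 0.1, 0.5     # illustrative gain and backoff constants

w0 = 10.0                  # window just after the most recent loss
t = 0                      # RTTs elapsed since that loss
trace = []
for rtt in range(30):
    w = w0 + ALPHA * t * t          # super-linear (quadratic) increase
    if rtt == 14:                   # a single illustrative loss event
        w0, t = BETA * w, 0         # multiplicative decrease, new epoch
    else:
        t += 1
    trace.append(round(w, 1))
print(trace)   # slow growth right after the loss, accelerating later
```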

Relevance: 10.00%

Abstract:

Overlay networks have emerged as a powerful and highly flexible method for delivering content. We study how to optimize the throughput of large, multipoint transfers across richly connected overlay networks, focusing on the question of what to put in each transmitted packet. We first make the case for transmitting encoded content in this scenario, arguing for the digital fountain approach, which enables end-hosts to efficiently restitute the original content of size n from any n symbols drawn from a large universe of encoded symbols. Such an approach affords reliability and a substantial degree of application-level flexibility, as it seamlessly tolerates packet loss, connection migration, and parallel transfers. However, since the sets of symbols acquired by peers are likely to overlap substantially, care must be taken to enable them to collaborate effectively. We provide a collection of useful algorithmic tools for efficient estimation, summarization, and approximate reconciliation of sets of symbols between pairs of collaborating peers, all of which keep messaging complexity and computation to a minimum. Through simulations and experiments on a prototype implementation, we demonstrate the performance benefits of our informed content delivery mechanisms and how they complement existing overlay network architectures.
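
One standard tool for the "efficient estimation" step is min-wise hashing: comparing short signatures estimates the overlap between two peers' symbol sets before any reconciliation traffic is spent. The sketch below is a generic construction, illustrative only and not the paper's exact mechanism.

```python
import hashlib

def minhash_signature(symbols, num_hashes=64):
    """Signature of a set: the minimum of each of num_hashes salted
    hashes over the set's elements."""
    sig = []
    for i in range(num_hashes):
        sig.append(min(hashlib.sha256(f"{i}:{s}".encode()).digest()
                       for s in symbols))
    return sig

def estimated_jaccard(sig_a, sig_b):
    # P[the i-th minima agree] equals the Jaccard similarity |A∩B|/|A∪B|.
    matches = sum(a == b for a, b in zip(sig_a, sig_b))
    return matches / len(sig_a)

a = {f"symbol-{i}" for i in range(1000)}
b = {f"symbol-{i}" for i in range(500, 1500)}
print(estimated_jaccard(minhash_signature(a), minhash_signature(b)))  # ~0.33
```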

Relevance: 10.00%

Abstract:

As new multi-party edge services are deployed on the Internet, application-layer protocols with complex communication models and event dependencies are increasingly being specified and adopted. To ensure that such protocols (and compositions thereof with existing protocols) do not result in undesirable behaviors (e.g., livelocks), there needs to be a methodology for the automated checking of the "safety" of these protocols. In this paper, we present ingredients of such a methodology. Specifically, we show how SPIN, a tool from the formal systems verification community, can be used to quickly identify problematic behaviors of application-layer protocols with non-trivial communication models, such as HTTP with the addition of the "100 Continue" mechanism. As a case study, we examine several versions of the specification for the Continue mechanism; our experiments mechanically uncovered multi-version interoperability problems, including some which motivated revisions of HTTP/1.1 and some which persist even with the current version of the protocol. One such problem resembles a classic degradation-of-service attack, but can arise between well-meaning peers. We also discuss how the methods we employ can be used to make explicit the requirements for hardening a protocol's implementation against potentially malicious peers, and for verifying an implementation's interoperability with the full range of allowable peer behaviors.
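
What a checker like SPIN does, at its core, is exhaustively explore the joint state space of the communicating agents and flag states that violate a property. The toy Python search below shows the shape of that computation (breadth-first reachability with stuck-state detection) on an invented two-party handshake; it is not the HTTP model from the paper, and a real model would be written in SPIN's Promela.

```python
from collections import deque

def successors(state):
    """Invented transition relation for a two-party handshake."""
    client, server = state
    moves = []
    if client == "want_send" and server == "idle":
        moves.append(("await_continue", "got_request"))
    if client == "await_continue" and server == "got_request":
        moves.append(("sending", "expecting_body"))
    # Omitting, say, a server timeout path here is exactly the kind of
    # spec gap that exhaustive exploration would expose as a deadlock.
    return moves

def find_deadlocks(start, final_states):
    seen, frontier, deadlocks = {start}, deque([start]), []
    while frontier:
        s = frontier.popleft()
        nxt = successors(s)
        if not nxt and s not in final_states:
            deadlocks.append(s)          # reachable stuck state
        for t in nxt:
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return deadlocks

print(find_deadlocks(("want_send", "idle"),
                     {("sending", "expecting_body")}))  # -> [] here
```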

Relevance: 10.00%

Abstract:

We present new, simple, efficient data structures for approximate reconciliation of set differences, a useful standalone primitive for peer-to-peer networks and a natural subroutine in methods for exact reconciliation. In the approximate reconciliation problem, peers A and B have subsets S_A and S_B, respectively, of a large universe U. Peer A wishes to send a short message M to peer B with the goal that B should use M to determine as many elements in the set S_B − S_A as possible. To avoid the expense of round-trip communication times, we focus on the situation where a single message M is sent. We motivate the performance tradeoffs between message size, accuracy and computation time for this problem with a straightforward approach using Bloom filters. We then introduce approximate reconciliation trees, a more computationally efficient solution that combines techniques from Patricia tries, Merkle trees, and Bloom filters. We present an analysis of approximate reconciliation trees and provide experimental results comparing the various methods proposed for approximate reconciliation.
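
The straightforward Bloom-filter approach named above fits in a few lines: A's message M is a Bloom filter of S_A, and B keeps the elements of S_B that miss the filter. False positives only cause B to overlook a few elements of S_B − S_A, which is the "approximate" in approximate reconciliation. The filter sizes and set contents below are illustrative.

```python
import hashlib

class BloomFilter:
    def __init__(self, num_bits=8192, num_hashes=4):
        self.m, self.k = num_bits, num_hashes
        self.bits = bytearray(num_bits // 8)

    def _positions(self, item):
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

S_A = {f"elt-{i}" for i in range(900)}
S_B = {f"elt-{i}" for i in range(800, 1000)}

message = BloomFilter()          # this filter plays the role of M
for x in S_A:
    message.add(x)

# B recovers most of S_B - S_A; false positives cause only omissions.
recovered = {x for x in S_B if x not in message}
print(len(recovered), "of", len(S_B - S_A))
```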

Relevance: 10.00%

Abstract:

Formal tools like finite-state model checkers have proven useful in verifying the correctness of systems of bounded size and for hardening single system components against arbitrary inputs. However, conventional applications of these techniques are not well suited to characterizing emergent behaviors of large compositions of processes. In this paper, we present a methodology by which arbitrarily large compositions of components can, if sufficient conditions are proven concerning properties of small compositions, be modeled and completely verified by performing formal verifications upon only a finite set of compositions. The sufficient conditions take the form of reductions, which are claims that particular sequences of components will be causally indistinguishable from other shorter sequences of components. We show how this methodology can be applied to a variety of network protocol applications, including two features of the HTTP protocol, a simple active networking applet, and a proposed web cache consistency algorithm. We also discuss its applicability to framing protocol design goals and to representing systems which employ non-model-checking verification methodologies. Finally, we briefly discuss how we hope to broaden this methodology to more general topological compositions of network applications.