Abstract:
The issue of ancestors has been controversial since the first encounters of Christianity with Shona religion. It remains a major theological problem that needs to be addressed within the mainline churches of Zimbabwe today. Instead of ignoring or dismissing the ancestor cult, which deeply influences the socio-political, religious, and economic lives of the Shona, churches in Zimbabwe should initiate a Christology that is based on it. Such a Christology would engage the critical day-to-day issues that make the Shona turn to their ancestors. Among these concerns are daily protection from misfortune, maintaining good health and increasing longevity, successful rainy seasons and food security, and responsible governance characterized by economic and political stability. Since the mid-16th-century arrival of Jesuit missionaries in the Mutapa Kingdom, the Church has realized that many African Christians resorted to their ancestors in times of crisis. Although both Catholic and Protestant missionaries from the 1700s through the early 1900s fiercely attacked Shona traditional beliefs as superstitious and equated ancestors with evil spirits, the cult did not die. Social institutions, such as schools and hospitals provided by missionaries, failed to eliminate ancestral beliefs. Even in the 21st century, many Zimbabweans consult their ancestors. The Shona message to the church remains "Not without My Ancestors." This dissertation examines the significance of the ancestors to the Shona, and how selected denominations and new religious movements have interpreted and accommodated ancestral practices. Taking the missiological goal of "self-theologizing" as the framework, this dissertation proposes a "tripartite Christology" of "Jesus the Family Ancestor," "Jesus the Tribal Ancestor," and "Jesus the National Ancestor," which is based on the Shona "tripartite ancestrology." Familiar ecclesiological and liturgical language, idioms, and symbols are used to contribute to the wider Shona understanding of Jesus as the ancestor par excellence, in whom physical and spiritual needs, including those the ordinary ancestors fail to meet, are fulfilled.
Abstract:
Coherent shared memory is a convenient, but inefficient, method of inter-process communication for parallel programs. By contrast, message passing can be less convenient, but more efficient. To get the benefits of both models, several non-coherent memory behaviors have recently been proposed in the literature. We present an implementation of Mermera, a shared memory system that supports both coherent and non-coherent behaviors in a manner that enables programmers to mix multiple behaviors in the same program [HS93]. A programmer can debug a Mermera program using coherent memory, and then improve its performance by selectively reducing the level of coherence in the parts that are critical to performance. Mermera permits a trade-off of coherence for performance. We analyze this trade-off through measurements of our implementation, and by an example that illustrates the style of programming needed to exploit non-coherence. We find that, even on a small network of workstations, the performance advantage of non-coherence is compelling. Raw non-coherent memory operations perform 20-40 times better than coherent memory operations. An example application program is shown to run 5-11 times faster when permitted to exploit non-coherence. We conclude by commenting on our use of the Isis Toolkit of multicast protocols in implementing Mermera.
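As a rough illustration of the programming style this abstract alludes to, the following Python sketch mimics mixing coherent and non-coherent behaviors in one program. The names (SharedRegion, write_coherent, write_noncoherent, flush) are invented for illustration; they are not Mermera's actual interface.

```python
# Hypothetical sketch, not Mermera's API: coherent writes become visible
# immediately (at a synchronization cost), non-coherent writes are
# buffered locally and propagated only at programmer-chosen flush points.

class SharedRegion:
    def __init__(self):
        self.store = {}          # globally visible values
        self.local_buffer = {}   # non-coherent writes, not yet propagated

    def write_coherent(self, key, value):
        # In a real system this would cost a synchronizing multicast.
        self.store[key] = value

    def write_noncoherent(self, key, value):
        # Cheap: other processes may observe stale values until flush().
        self.local_buffer[key] = value

    def flush(self):
        # Propagate buffered writes, e.g. at the end of a computation phase.
        self.store.update(self.local_buffer)
        self.local_buffer.clear()

    def read(self, key, default=None):
        return self.store.get(key, default)

# Debug with coherent writes everywhere; once correct, switch the
# performance-critical inner loop to non-coherent writes plus one flush.
region = SharedRegion()
for i in range(1000):
    region.write_noncoherent(f"x{i}", i * i)  # cheap, possibly stale
region.flush()                                # single synchronization point
region.write_coherent("done", True)           # rare, so coherence is cheap
```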
Abstract:
Parallel computing on a network of workstations can saturate the communication network, leading to excessive message delays and consequently poor application performance. We examine empirically the consequences of integrating a flow control protocol, called Warp control [Par93], into Mermera, a software shared memory system that supports parallel computing on distributed systems [HS93]. For an asynchronous iterative program that solves a system of linear equations, our measurements show that Warp succeeds in stabilizing the network's behavior even under high levels of contention. As a result, the application achieves a higher effective communication throughput and a reduced completion time. In some cases, however, Warp control does not achieve the performance attainable by fixed-size buffering when using a statically optimal buffer size. Our use of Warp to regulate the allocation of network bandwidth points to the possibility of integrating it with the allocation of other resources, such as CPU cycles and disk bandwidth, so as to optimize overall system throughput and enable fully shared execution of parallel programs.
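Warp control itself is specified in [Par93] and is not reproduced here; the sketch below only illustrates the general idea of feedback-based flow control adapting to contention, in contrast to a statically sized buffer. The adjustment rule and all constants are assumptions for illustration.

```python
# Generic feedback flow control sketch (not Warp): shrink the send
# window when the network signals congestion (growing delay), grow it
# when the network is underused.

def adjust_window(window, measured_delay, target_delay,
                  min_window=1, max_window=64):
    """Additive-increase / multiplicative-decrease style adjustment.
    All parameters are illustrative, not taken from Warp."""
    if measured_delay > target_delay:
        return max(min_window, window // 2)   # back off under contention
    return min(max_window, window + 1)        # probe for more bandwidth

# With fixed-size buffering, the window would instead stay constant at a
# statically chosen size -- optimal only if contention is known in advance.
w = 16
for delay in [5, 5, 30, 40, 5, 5]:   # made-up delay samples (ms)
    w = adjust_window(w, delay, target_delay=20)
    print("window =", w)
```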
Abstract:
Recent work in sensor databases has focused extensively on distributed query problems, notably distributed computation of aggregates. Existing methods for computing aggregates broadcast queries to all sensors and use in-network aggregation of responses to minimize messaging costs. In this work, we focus on uniform random sampling across nodes, which can serve both as an alternative building block for aggregation and as an integral component of many other useful randomized algorithms. Prior to our work, the best existing proposals for uniform random sampling of sensors involve contacting all nodes in the network. We propose a practical method which is only approximately uniform, but contacts a number of sensors proportional to the diameter of the network instead of its size. The approximation achieved is tunably close to exact uniform sampling, and only relies on well-known existing primitives, namely geographic routing, distributed computation of Voronoi regions and von Neumann's rejection method. Ultimately, our sampling algorithm has the same worst-case asymptotic cost as routing a point-to-point message, and thus it is asymptotically optimal among request/reply-based sampling methods. We provide experimental results demonstrating the effectiveness of our algorithm on both synthetic and real sensor topologies.
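A minimal, centralized Python sketch of the sampling idea follows, assuming a toy setting where Voronoi cell areas are estimated locally by Monte Carlo; in the real protocol the candidate sensor is reached via geographic routing and Voronoi regions are computed distributedly.

```python
import math
import random

# Pick a uniform random location; the sensor owning that Voronoi cell is
# the candidate. Landing in cell i happens with probability proportional
# to its area, so von Neumann rejection (accept with prob a_min/area[i])
# undoes the bias toward sensors with large cells.

sensors = [(random.random(), random.random()) for _ in range(50)]

def nearest(point):
    return min(range(len(sensors)),
               key=lambda i: math.dist(point, sensors[i]))

# Estimate each sensor's Voronoi cell area by Monte Carlo.
hits = [0] * len(sensors)
trials = 20000
for _ in range(trials):
    hits[nearest((random.random(), random.random()))] += 1
area = [h / trials for h in hits]
a_min = min(a for a in area if a > 0)

def sample_sensor():
    while True:
        i = nearest((random.random(), random.random()))
        # Selection probability becomes (area[i]) * (a_min/area[i]),
        # i.e. the same a_min for every sensor -- approximately uniform,
        # up to the error in the area estimates.
        if area[i] > 0 and random.random() < a_min / area[i]:
            return i

counts = [0] * len(sensors)
for _ in range(5000):
    counts[sample_sensor()] += 1
```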
Abstract:
Communication and synchronization stand as the dual bottlenecks in the performance of parallel systems, and especially those that attempt to alleviate the programming burden by incurring overhead in these two domains. We formulate the notions of communicable memory and lazy barriers to help achieve efficient communication and synchronization. These concepts are developed in the context of BSPk, a toolkit library for programming networks of workstations (and other distributed memory architectures in general) based on the Bulk Synchronous Parallel (BSP) model. BSPk emphasizes efficiency in communication by minimizing local memory-to-memory copying, and in barrier synchronization by not forcing a process to wait unless it needs remote data. Both the message passing (MP) and distributed shared memory (DSM) programming styles are supported in BSPk. MP helps processes efficiently exchange short-lived unnamed data values, when the identity of either the sender or receiver is known to the other party. By contrast, DSM supports communication between processes that may be mutually anonymous, so long as they can agree on variable names in which to store shared temporary or long-lived data.
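The toy sketch below (not BSPk's actual implementation) illustrates the lazy-barrier idea: a process blocks only when it actually reads remote data that has not yet arrived, instead of waiting at a global barrier.

```python
import threading

# Each shared cell carries its own readiness event, so a reader pays a
# synchronization cost only for the specific remote value it needs.

class LazyCell:
    def __init__(self):
        self._event = threading.Event()
        self._value = None

    def put(self, value):          # remote write, then signal readiness
        self._value = value
        self._event.set()

    def get(self):                 # block only if the value is needed
        self._event.wait()         # and has not yet been delivered
        return self._value

cell = LazyCell()

def producer():
    cell.put(42)   # no global barrier needed around other work

def consumer():
    # Proceeds freely until it truly needs the remote value.
    local_work = sum(range(1000))
    print("remote value:", cell.get(), "local:", local_work)

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t2.start(); t1.start(); t1.join(); t2.join()
```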
Abstract:
We present new, simple, efficient data structures for approximate reconciliation of set differences, a useful standalone primitive for peer-to-peer networks and a natural subroutine in methods for exact reconciliation. In the approximate reconciliation problem, peers A and B hold subsets S_A and S_B, respectively, of a large universe U. Peer A wishes to send a short message M to peer B with the goal that B should use M to determine as many elements in the set S_B - S_A as possible. To avoid the expense of round-trip communication times, we focus on the situation where a single message M is sent. We motivate the performance tradeoffs between message size, accuracy and computation time for this problem with a straightforward approach using Bloom filters. We then introduce approximate reconciliation trees, a more computationally efficient solution that combines techniques from Patricia tries, Merkle trees, and Bloom filters. We present an analysis of approximate reconciliation trees and provide experimental results comparing the various methods proposed for approximate reconciliation.
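A minimal sketch of the baseline Bloom-filter approach described above, with illustrative (untuned) parameters: peer A sends a Bloom filter of S_A as the single message M, and peer B flags elements the filter rejects. Since Bloom filters have no false negatives, anything rejected is guaranteed to lie in S_B - S_A; false positives only cause some differences to go undetected.

```python
import hashlib

M_BITS = 1 << 12   # filter size; a real deployment tunes this
K = 4              # number of hash functions

def _hashes(item):
    digest = hashlib.sha256(item.encode()).digest()
    return [int.from_bytes(digest[4*i:4*i+4], "big") % M_BITS
            for i in range(K)]

def make_filter(items):
    bits = bytearray(M_BITS)   # one byte per bit, for simplicity
    for it in items:
        for h in _hashes(it):
            bits[h] = 1
    return bits

def missing_at_a(bloom_of_a, items_b):
    # Anything the filter rejects is certainly not in S_A.
    return [it for it in items_b
            if not all(bloom_of_a[h] for h in _hashes(it))]

S_A = {f"block{i}" for i in range(0, 900)}
S_B = {f"block{i}" for i in range(100, 1000)}
M = make_filter(S_A)             # the single message from A to B
found = missing_at_a(M, S_B)     # subset of S_B - S_A detected by B
print(len(found), "of", len(S_B - S_A), "differences detected")
```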
Abstract:
Commonly, research work in routing for delay tolerant networks (DTN) assumes that node encounters are predestined, in the sense that they are the result of unknown, exogenous processes that control the mobility of these nodes. In this paper, we argue that for many applications such an assumption is too restrictive: while the spatio-temporal coordinates of the start and end points of a node's journey are determined by exogenous processes, the specific path that a node may take in space-time, and hence the set of nodes it may encounter, could be controlled in such a way as to improve the performance of DTN routing. To that end, we consider a setting in which each mobile node is governed by a schedule consisting of a list of locations that the node must visit at particular times. Typically, such schedules exhibit some level of slack, which could be leveraged for DTN message delivery purposes. We define the Mobility Coordination Problem (MCP) for DTNs as follows: Given a set of nodes, each with its own schedule, and a set of messages to be exchanged between these nodes, devise a set of node encounters that minimizes message delivery delays while satisfying all node schedules. The MCP for DTNs is general enough that it allows us to model and evaluate some of the existing DTN schemes, including data mules and message ferries. In this paper, we show that MCP for DTNs is NP-hard and propose two detour-based approaches to solve the problem. The first (DMD) is a centralized heuristic that leverages knowledge of the message workload to suggest specific detours to optimize message delivery. The second (DNE) is a distributed heuristic that is oblivious to the message workload, and which selects detours so as to maximize node encounters. We evaluate the performance of these detour-based approaches using extensive simulations based on synthetic workloads as well as real schedules obtained from taxi logs in a major metropolitan area. Our evaluation shows that our centralized, workload-aware DMD approach yields the best performance, in terms of message delay and delivery success ratio, and that our distributed, workload-oblivious DNE approach yields favorable performance when compared to approaches that require the use of data mules and message ferries.
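The DMD and DNE heuristics themselves are not specified in the abstract; the sketch below only illustrates the slack-feasibility test that any detour-based approach needs, under an assumed constant-speed travel model with invented names.

```python
import math

def travel_time(a, b, speed=1.0):
    return math.dist(a, b) / speed

def detour_feasible(stop_i, t_i, stop_j, t_j, p, speed=1.0):
    """stop_i is left at time t_i and stop_j must be reached by t_j.
    A detour via rendezvous point p is feasible iff the schedule's
    slack covers the extra travel time."""
    direct = travel_time(stop_i, stop_j, speed)
    via_p = travel_time(stop_i, p, speed) + travel_time(p, stop_j, speed)
    slack = (t_j - t_i) - direct
    return via_p - direct <= slack

# A node with 5 time units of slack can afford this small detour:
print(detour_feasible((0, 0), 0.0, (10, 0), 15.0, p=(5, 2)))  # True
```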
Abstract:
We consider the problem of building robust fuzzy extractors, which allow two parties holding similar random variables W, W' to agree on a secret key R in the presence of an active adversary. Robust fuzzy extractors were defined by Dodis et al. in Crypto 2006 [6] to be noninteractive, i.e., only one message P, which can be modified by an unbounded adversary, can pass from one party to the other. This allows them to be used by a single party at different points in time (e.g., for key recovery or biometric authentication), but also presents an additional challenge: what if R is used, and thus possibly observed by the adversary, before the adversary has a chance to modify P? Fuzzy extractors secure against such a strong attack are called post-application robust. We construct a fuzzy extractor with post-application robustness that extracts a shared secret key of up to (2m−n)/2 bits (depending on error-tolerance and security parameters), where n is the bit-length and m is the entropy of W. The previously best known result, also of Dodis et al. [6], extracted up to (2m−n)/3 bits (depending on the same parameters).
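Purely as an illustration of the improvement in the leading term, ignoring the unspecified losses for error tolerance and security, a quick comparison of the two bounds:

```python
# n is the bit-length of W, m its entropy, per the abstract.

def old_bound(n, m):   # prior result of Dodis et al. [6]
    return max(0, (2 * m - n) // 3)

def new_bound(n, m):   # this work's post-application robust construction
    return max(0, (2 * m - n) // 2)

n, m = 1024, 768       # made-up example parameters
print(old_bound(n, m), "vs", new_bound(n, m), "extractable bits")
# 2*768 - 1024 = 512, so roughly 170 bits before vs 256 bits now.
```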
Abstract:
The pervasiveness of personal computing platforms offers an unprecedented opportunity to deploy large-scale services that are distributed over wide physical spaces. Two major challenges face the deployment of such services: the often resource-limited nature of these platforms, and the necessity of preserving the autonomy of the owners of these devices. These challenges preclude the use of centralized control and preclude services that are subject to performance guarantees. To that end, this thesis advances a number of new distributed resource management techniques that are shown to be effective in such settings, focusing on two application domains: distributed Field Monitoring Applications (FMAs) and Message Delivery Applications (MDAs). In the context of FMAs, this thesis presents two techniques that are well-suited to the fairly limited storage and power resources of autonomously mobile sensor nodes. The first technique relies on amorphous placement of sensory data through the use of novel storage management and sample diffusion techniques. The second approach relies on an information-theoretic framework to optimize local resource management decisions. Both approaches are proactive in that they aim to provide nodes with a view of the monitored field that reflects the characteristics of queries over that field, enabling them to handle more queries locally, and thus reduce communication overheads. This thesis then recognizes node mobility as a resource to be leveraged, and in that respect proposes novel mobility coordination techniques for FMAs and MDAs. Assuming that node mobility is governed by a spatio-temporal schedule featuring some slack, this thesis presents novel algorithms of various computational complexities to orchestrate the use of this slack to improve the performance of supported applications. The findings in this thesis, which are supported by analysis and extensive simulations, highlight the importance of two general design principles for distributed systems. First, a priori knowledge (e.g., about the target phenomena of FMAs and/or the workload of either FMAs or MDAs) could be used effectively for local resource management. Second, judicious leverage and coordination of node mobility could lead to significant performance gains for distributed applications deployed over resource-impoverished infrastructures.
Abstract:
Recent advances in processor speeds, mobile communications and battery life have enabled computers to evolve from completely wired to completely mobile. In the most extreme case, all nodes are mobile and communication takes place at available opportunities, using both traditional communication infrastructure as well as the mobility of intermediate nodes. These are mobile opportunistic networks. Data communication in such networks is a difficult problem, because of the dynamic underlying topology, the scarcity of network resources and the lack of global information. Establishing end-to-end routes in such networks is usually not feasible. Instead, a store-and-carry forwarding paradigm is better suited for such networks. This dissertation describes and analyzes algorithms for forwarding of messages in such networks. In order to design effective forwarding algorithms for mobile opportunistic networks, we start by building an understanding of the set of all paths between nodes, which represent the available opportunities for any forwarding algorithm. Relying on real measurements, we enumerate paths between nodes and uncover what we refer to as the path explosion effect. The term path explosion refers to the fact that the number of paths between a randomly selected pair of nodes increases exponentially with time. We draw from the theory of epidemics to model and explain the path explosion effect. This is the first contribution of the thesis, and is a key observation that underlies subsequent results. Our second contribution is the study of forwarding algorithms. For this, we rely on trace-driven simulations of different algorithms that span a range of design dimensions. We compare the performance (success rate and average delay) of these algorithms. We make the surprising observation that most algorithms we consider have roughly similar performance. We explain this result in light of the path explosion phenomenon. While the performance of most algorithms we studied was roughly the same, these algorithms differed in terms of cost. This prompted us to focus on designing algorithms with the explicit intent of reducing costs. For this, we cast the problem of forwarding as an optimal stopping problem. Our third main contribution is the design of strategies based on optimal stopping principles, which we refer to as Delegation schemes. Our analysis shows that using a delegation scheme reduces cost over naive forwarding by a factor of O(√N), where N is the number of nodes in the network. We further validate this result on real traces, where the cost reduction observed is even greater. Our results so far include a key assumption, which is unbounded buffers on nodes. Next, we relax this assumption, so that the problem shifts to one of prioritization of messages for transmission and dropping. Our fourth contribution is the study of message prioritization schemes, combined with forwarding. Our main result is that one achieves higher performance by assigning higher priorities to young messages in the network. We again interpret this result in light of the path explosion effect.
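A sketch of the delegation rule as the abstract describes it, under an assumed per-node "quality" metric: a message is copied to an encountered node only if that node beats the best quality the message has seen so far. The O(√N) cost bound is a system-wide result from the thesis's analysis; this toy single-message run does not attempt to reproduce it.

```python
import random

class Message:
    def __init__(self, source_quality):
        self.threshold = source_quality   # best quality seen so far

def on_encounter(msg, peer_quality):
    """Return True if the message should be replicated to the peer."""
    if peer_quality > msg.threshold:
        msg.threshold = peer_quality      # the copy raises the bar too
        return True
    return False

# The per-node quality metric (here a random "contact rate") is an
# assumption for illustration; the thesis considers several metrics.
N = 10000
qualities = [random.random() for _ in range(N)]
msg = Message(source_quality=qualities[0])
copies = 1
for q in random.sample(qualities, k=1000):   # one stream of encounters
    if on_encounter(msg, q):
        copies += 1
print("copies made along this encounter stream:", copies)
```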
Abstract:
We revisit the problem of connection management for reliable transport. At one extreme, a pure soft-state (SS) approach (as in Delta-t [9]) safely removes the state of a connection at the sender and receiver once the state timers expire, without the need for explicit removal messages, and new connections are established without an explicit handshaking phase. At the other extreme, a hybrid hard-state/soft-state (HS+SS) approach (as in TCP) uses both explicit handshaking and timer-based management of the connection's state. In this paper, we consider the worst-case scenario of reliable single-message communication, and develop a common analytical model that can be instantiated to capture either the SS approach or the HS+SS approach. We compare the two approaches in terms of goodput, message and state overhead. We also use simulations to compare against other approaches, and evaluate them in terms of correctness (with respect to data loss and duplication) and robustness to bad network conditions (high message loss rate and variable channel delays). Our results show that the SS approach is more robust and has lower message overhead. On the other hand, SS requires more memory to keep connection states, which reduces goodput. Given that memories are getting bigger and cheaper, SS presents the best choice over bandwidth-constrained, error-prone networks.
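A toy sketch of the soft-state idea, with arbitrary timer values: connection state is created or refreshed by the arrival of any packet and removed purely by timer expiry, with no handshake or teardown messages.

```python
import time

STATE_LIFETIME = 2.0   # seconds; must safely exceed maximum packet lifetime

class SoftStateTable:
    def __init__(self):
        self.conns = {}   # conn_id -> (state, expiry_time)

    def touch(self, conn_id, state):
        # Any packet for a connection (re)creates and refreshes its
        # state; no connection-establishment handshake is needed.
        self.conns[conn_id] = (state, time.monotonic() + STATE_LIFETIME)

    def purge(self):
        # Expired state vanishes silently; no removal messages are sent.
        now = time.monotonic()
        self.conns = {c: (s, t) for c, (s, t) in self.conns.items()
                      if t > now}

table = SoftStateTable()
table.touch("conn-1", {"next_seq": 1})
time.sleep(0.1)
table.purge()
print("conn-1" in table.conns)   # still alive; it expires after 2s idle
```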
Abstract:
An increasing number of applications, such as distributed interactive simulation, live auctions, distributed games and collaborative systems, require the network to provide a reliable multicast service. This service enables one sender to reliably transmit data to multiple receivers. Reliability is traditionally achieved by having receivers send negative acknowledgments (NACKs) to request from the sender the retransmission of lost (or missing) data packets. However, this Automatic Repeat reQuest (ARQ) approach results in the well-known NACK implosion problem at the sender. Many reliable multicast protocols have recently been proposed to reduce NACK implosion. But the message overhead due to NACK requests remains significant. Another approach, based on Forward Error Correction (FEC), requires the sender to encode additional redundant information so that a receiver can independently recover from losses. However, due to the lack of feedback from receivers, it is impossible for the sender to determine how much redundancy is needed. In this paper, we propose a new reliable multicast protocol, called ARM for Adaptive Reliable Multicast. Our protocol integrates ARQ and FEC techniques. The objectives of ARM are to (1) reduce the message overhead due to NACK requests, (2) reduce the amount of data transmission, and (3) reduce the time it takes for all receivers to receive the data intact (without loss). During data transmission, the sender periodically informs the receivers of the number of packets that are yet to be transmitted. Based on this information, each receiver predicts whether this amount is enough to recover its losses. Only if it is not enough does the receiver request the sender to encode additional redundant packets. Using ns simulations, we show the superiority of our hybrid ARQ-FEC protocol over the well-known Scalable Reliable Multicast (SRM) protocol.
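A sketch of the receiver-side prediction described above, assuming an ideal erasure code in which any total_blocks distinct packets suffice to rebuild the data; the names and accounting are illustrative, not ARM's actual protocol state.

```python
def packets_still_needed(total_blocks, received_distinct):
    # With an ideal erasure code, total_blocks distinct packets suffice.
    return max(0, total_blocks - received_distinct)

def should_request_redundancy(total_blocks, received_distinct,
                              announced_remaining):
    """announced_remaining is the sender's periodic announcement of how
    many packets are still to be transmitted."""
    needed = packets_still_needed(total_blocks, received_distinct)
    # Stay silent if what the sender will transmit anyway is enough;
    # this suppression is what reduces NACK traffic.
    return needed > announced_remaining

# A receiver missing 12 blocks that will still see 20 packets stays quiet;
# one missing 40 blocks requests extra redundancy.
print(should_request_redundancy(100, 88, announced_remaining=20))  # False
print(should_request_redundancy(100, 60, announced_remaining=20))  # True
```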
Abstract:
The lives of Thomas and Anna Haslam were dedicated to the attainment of women's equality. They were feminists before the word was coined. In an era when respectable women were not supposed to know of the existence of prostitutes, Anna became empowered to do the unthinkable: not only to speak in public but to discuss openly matters sexual and to attack the double standard of sexuality which was enshrined in the official treatment of prostitutes. Their life-long commitment to the cause of women's suffrage never faltered, despite the repeated discouragement of the fate of bills defeated in the House of Commons. The Haslams represented an Ireland which did not survive them. While they were dedicated to the union with Westminster, they worked happily with those who applied themselves to its destruction. Although in many ways they exemplified the virtues of their Quaker backgrounds, they did not subscribe to any organised religion. Despite living in straitened circumstances, they were part of an urban intellectual elite and participated in the social and cultural life of Dublin for over fifty years. It is tempting to speculate how the Haslams would have fared in post-independence Ireland. Hanna Sheehy Skeffington, who had impeccable nationalist credentials, was effectively marginalised. It is likely that they would have protested against discriminatory legislation in their usual law-abiding manner but, in a country which quickly developed an overwhelmingly Roman Catholic ethos, would they have had a voice or a constituency? Ironically, Thomas's teaching on chastity would have found favour with the hierarchy; his message was disseminated in a simpler and more pious manner in numerous Catholic Truth Society pamphlets. The Protestant minority never sought to subvert the institutions of the state, was careful not to criticise, and kept its collective head down. Dáil Éireann was not bombarded with petitions for the restoration of divorce facilities or the unbanning of birth control. Those who sought such amenities obtained them quietly 'in another jurisdiction.' Fifty years were to pass before the condom-wielding 'comely maidens' erupted onto the front pages of the Sunday papers. They were, one imagines, the spiritual descendants of the militant rather than the constitutional suffrage movement. "Once and for all we need to commit ourselves to the concept that women's rights are not factional or sectional privileges, bestowed on the few at the whim of the many. They are human rights. In a society in which the rights and potential of women are constrained no man can be truly free." These words, spoken by Mary Robinson as President of Ireland, are an echo of the principles to which the Haslams dedicated their lives and are, perhaps, a tribute to their efforts.
Abstract:
Obesity has been defined as a consequence of energy imbalance, where energy intake exceeds energy expenditure and results in a build-up of adipose tissue. However, this scientific definition masks the complicated social meanings associated with the condition. This research investigated the construction of meaning around obesity at various levels of inquiry to inform how obesity is portrayed and understood in Ireland. A multi-paradigmatic approach was adopted, drawing on theory and methods from psychology and sociology, and an analytical framework combining the Common Sense Model and framing theory was employed. In order to examine the exo-level meanings of obesity, content analysis was performed on two media data sets (n=479, n=346) and a thematic analysis was also performed on the multiple-newspaper sample (n=346). At the micro-level, obesity discourses were investigated via thematic analysis of comments sampled from an online message board. Finally, an online survey assessed individual-level beliefs and understandings of obesity. The media analysis revealed that individual blame for obesity was pervasive and the behavioural frame was dominant. A significant increase in attention to obesity over time was observed, manifestations of weight stigma were common, and there was an emotive discourse of blame directed towards the parents of obese children. The micro-level analysis provided insight into weight-based stigma in society, and a clear set of negative ‘default’ judgements accompanied the obese label. The survey analysis confirmed that the behavioural frame was the dominant means of understanding obesity. One of the strengths of this thesis is the link created between framing and the Common Sense Model in the development of an analytical framework for the examination of health/illness representations. This approach helped to ascertain the extent of the pervasive biomedical and individual-blame discourse on obesity, which establishes the basis for the stigmatisation of obese persons.
Abstract:
Polymorphic microsatellite DNA loci were used here in three studies, one on Salmo salar and two on S. trutta. In the case of S. salar, the survival of native fish, non-natives from a nearby catchment, and their hybrids was compared in a freshwater common garden experiment and subsequently in ocean ranching, with parental assignment utilising microsatellites. Overall survival of non-natives was 35% of that of natives. This differential survival occurred mainly in the oceanic phase. These results imply a genetic basis and suggest that local adaptation can occur in salmonids across relatively small geographic distances, which may have important implications for the management of salmon populations. In the first case study with S. trutta, the species was investigated throughout its spread as an invasive in Newfoundland, eastern Canada. Genetic investigation confirmed historical records that the majority of introductions were from a Scottish hatchery, and provided a clear example of the structure of two expanding waves of spread along coasts, probably by natural straying of anadromous individuals, to the north and south of the point of human introduction. This study showed a clearer example of the genetic anatomy of an invasion than previous studies with brown trout, and may have implications for the management of invasive species in general. Finally, the genetics of anadromous S. trutta from the Waterville catchment in south-western Ireland were studied. Two significantly different population groupings, from tributaries in geographically distinct locations entering the largest lake in the catchment, were identified. These results were then used to assign very large rod-caught sea trout (so-called “specimen” sea trout) back to their region of origin, in a Genetic Stock Identification exercise. This suggested that the majority of these large sea trout originated from one of the two tributary groups. These results are relevant for the understanding of sea trout population dynamics and for the future management of this and other sea trout-producing catchments. This thesis has demonstrated new insights into the population structuring of salmonids both between and within catchments. While its chapters examine the existence and scale of genetic variation from different angles, the overarching message of this thesis is to highlight the importance of maintaining genetic diversity in salmonid populations as vital for their long-term productivity and resilience.