16 results for ILL service
in Boston University Digital Common
Abstract:
A Research Report from the "Organizing Religious Work Project," Hartford Institute for Religion Research, Hartford Seminary
Abstract:
The print copy of this sermon is held by Pitts Theology Library. The Pitts Theology Library's digital copy was produced as part of the ATLA/ATS Cooperative Digital Resources Initiative (CDRI), funded by the Luce Foundation. Reproduction note: Electronic reproduction. Atlanta, Georgia : Pitts Theology Library, Emory University, 2003. (Thanksgiving Day Sermons, ATLA Cooperative Digital Resources Initiative, CDRI). Joint CDRI project by: Andover-Harvard Library (Harvard Divinity School), Pitts Theology Library (Emory University), and Princeton Theological Seminary Libraries.
Abstract:
http://www.archive.org/details/lightsandshades00bhwuoft
Abstract:
http://www.archive.org/details/theparishpriesto00heusuoft
Abstract:
http://www.archive.org/details/callqualificatio00studuoft
Abstract:
Background: The loss of working-age adults to HIV/AIDS has been shown to increase the costs of labor to the private sector in Africa. There is little corresponding evidence for the public sector. This study evaluated the impact of AIDS on the capacity of a government agency, the Zambia Wildlife Authority (ZAWA), to patrol Zambia's national parks.
Methods: Data were collected from ZAWA on workforce characteristics, recent mortality, costs, and the number of days spent on patrol between 2003 and 2005 by a sample of 76 current patrol officers (reference subjects) and 11 patrol officers who died of AIDS or suspected AIDS (index subjects). An estimate was made of the impact of AIDS on service delivery capacity and labor costs and of the potential net benefits of providing treatment.
Results: Reference subjects spent an average of 197.4 days on patrol per year. After adjusting for age, years of service, and worksite, index subjects spent 62.8 days on patrol in their last year of service (68% decrease, p<0.0001), 96.8 days in their second-to-last year (51% decrease, p<0.0001), and 123.7 days in their third-to-last year (37% decrease, p<0.0001). For each employee who died, ZAWA lost an additional 111 person-days for management, funeral attendance, vacancy, and recruitment and training of a replacement, resulting in a total productivity loss per death of 2.0 person-years. Each AIDS-related death also imposed budgetary costs for care, benefits, recruitment, and training equivalent to 3.3 years' annual compensation. In 2005, AIDS reduced service delivery capacity by 6.2% and increased labor costs by 9.7%. If antiretroviral therapy could be provided for $500/patient/year, net savings to ZAWA would approach $285,000/year.
Conclusion: AIDS is constraining ZAWA's ability to protect Zambia's wildlife and parks. Impacts on this government agency are substantially larger than those observed in the private sector. Provision of ART would result in net budgetary savings to ZAWA and greatly increase its service delivery capacity.
Abstract:
This study explores the effectiveness of a church-based recovery program for the mentally ill in Korea, where many Christian communities view mental illness as evidence of sin. Building on theological and psychological literature, an empirical study was conducted with participants in the alternative program of the Han-ma-um community. Data analysis revealed that this program, which views mental disorders as illness rather than sin, helps participants build self-respect and enables families to provide support as they move toward recovery. Based on this empirical examination, recommendations for refinement and expansion of the program and avenues for future research are proposed.
Abstract:
This project examines the challenges military chaplains face when leading Gospel services in the United States Air Force in both domestic and deployed locations. It argues that some chaplains assigned to Gospel services do not have the ministry skill set to lead them effectively. Through quantitative and qualitative research methods involving surveys of 30 military chaplains, lay leaders, and parishioners, and follow-up interviews to explore critical issues identified by leaders and congregants alike, this project develops a Gospel service manual. This instructional primer outlines the historical evolution of the Gospel service and addresses its integral elements of worship as well as the challenges chaplains need to understand in order to meet the worship needs of multicultural and ecumenical military congregations.
Abstract:
This paper presents a new approach to window-constrained scheduling, suitable for multimedia and weakly-hard real-time systems. We originally developed an algorithm, called Dynamic Window-Constrained Scheduling (DWCS), that attempts to guarantee that no more than x out of y deadlines are missed for real-time jobs such as periodic CPU tasks or delay-constrained packet streams. While DWCS is capable of generating a feasible window-constrained schedule that utilizes 100% of resources, it requires all jobs to have the same request periods (or intervals between successive service requests). We describe a new algorithm, called Virtual Deadline Scheduling (VDS), that provides window-constrained service guarantees to jobs with potentially different request periods while still maximizing resource utilization. VDS attempts to service m out of k job instances by their virtual deadlines, which may be some finite time after the corresponding real-time deadlines. Notwithstanding these delays, VDS is capable of outperforming DWCS and similar algorithms when servicing jobs with potentially different request periods. Additionally, VDS is able to limit the extent to which a fraction of all job instances are serviced late. Results from simulations show that VDS can provide better window-constrained service guarantees than other related algorithms while still having as good or better delay bounds for all scheduled jobs. Finally, an implementation of VDS in the Linux kernel compares favorably against DWCS for a range of scheduling loads.
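To make the m-out-of-k idea above concrete, here is a minimal, hedged sketch of a window-constrained scheduler driven by per-job virtual deadlines. It is not the published VDS algorithm: the job model, the virtual-deadline heuristic, and the slot-by-slot simulation loop are illustrative assumptions, chosen only to show how jobs with different request periods can be compared on a single deadline axis.

```python
# Toy m-out-of-k window-constrained scheduler driven by per-job virtual
# deadlines. This is NOT the published VDS algorithm: the job model, the
# virtual-deadline heuristic, and the slot-by-slot loop are assumptions.
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    period: float      # interval between successive requests
    m: int             # instances that must be serviced ...
    k: int             # ... out of every k consecutive instances
    released: int = 0  # instances released so far
    serviced: int = 0  # instances serviced so far

    def virtual_deadline(self, now: float) -> float:
        # Assumed heuristic: a job that is behind its m/k quota gets an
        # earlier virtual deadline; a job that is ahead gets pushed past
        # its nominal (real-time) deadline.
        quota = self.m / self.k
        progress = self.serviced / self.released if self.released else 0.0
        slack = (progress - quota) * self.k * self.period
        return now + self.period + slack

def schedule(jobs, horizon: float, slot: float = 1.0):
    """Serve, in each time slot, the pending job with the earliest
    virtual deadline; return (serviced, released) counts per job."""
    now = 0.0
    while now < horizon:
        for j in jobs:                       # release new job instances
            if now >= j.released * j.period:
                j.released += 1
        pending = [j for j in jobs if j.serviced < j.released]
        if pending:
            winner = min(pending, key=lambda j: j.virtual_deadline(now))
            winner.serviced += 1             # service one instance this slot
        now += slot
    return {j.name: (j.serviced, j.released) for j in jobs}

if __name__ == "__main__":
    # Overloaded workload with different request periods, so the m/k
    # quotas decide which job wins each slot.
    jobs = [Job("audio", period=1.0, m=3, k=4),
            Job("video", period=2.0, m=1, k=2)]
    print(schedule(jobs, horizon=24.0))
```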
Abstract:
Detecting and understanding anomalies in IP networks is an open and ill-defined problem. Toward this end, we have recently proposed the subspace method for anomaly diagnosis. In this paper we present the first large-scale exploration of the power of the subspace method when applied to flow traffic. An important aspect of this approach is that it fuses information from flow measurements taken throughout a network. We apply the subspace method to three different types of sampled flow traffic in a large academic network: multivariate time series of byte counts, packet counts, and IP-flow counts. We show that each traffic type brings into focus a different set of anomalies via the subspace method. We illustrate and classify the set of anomalies detected. We find that almost all of the anomalies detected represent events of interest to network operators. Furthermore, the anomalies span a remarkably wide spectrum of event types, including denial of service attacks (single-source and distributed), flash crowds, port scanning, downstream traffic engineering, high-rate flows, worm propagation, and network outage.
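As a rough illustration of the subspace idea summarized above, the sketch below applies PCA to a matrix of per-interval counts and flags time bins whose residual energy is unusually large. The percentile threshold and the synthetic Poisson traffic are assumptions for illustration; the paper's actual traffic types and Q-statistic threshold are not reproduced here.

```python
# Back-of-the-envelope PCA/subspace anomaly detector. Assumptions: X is a
# matrix of per-interval counts (rows = time bins, columns = network-wide
# aggregates such as byte, packet, or flow counts), the "normal" subspace
# is spanned by the top-r principal components, and a time bin is flagged
# when its residual energy exceeds a simple percentile threshold (the
# paper uses the Q-statistic instead).
import numpy as np

def subspace_anomalies(X: np.ndarray, r: int = 4, pct: float = 99.5):
    Xc = X - X.mean(axis=0)                  # center each time series
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:r].T                             # basis of the normal subspace
    residual = Xc - Xc @ P @ P.T             # project out the normal part
    spe = (residual ** 2).sum(axis=1)        # squared prediction error
    threshold = np.percentile(spe, pct)
    return np.where(spe > threshold)[0], spe, threshold

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.poisson(100, size=(500, 20)).astype(float)   # synthetic traffic
    X[250, 3] += 400                                      # injected spike
    flagged, _, _ = subspace_anomalies(X)
    print("flagged time bins:", flagged)
```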
Abstract:
Speculative service implies that a client's request for a document is serviced by sending, in addition to the document requested, a number of other documents (or pointers thereto) that the server speculates will be requested by the client in the near future. This speculation is based on statistical information that the server maintains for each document it serves. The notion of speculative service is analogous to prefetching, which is used to improve cache performance in distributed/parallel shared memory systems, with the exception that servers (not clients) control when and what to prefetch. Using trace simulations based on the logs of our departmental HTTP server http://cs-www.bu.edu, we show that both server load and service time could be reduced considerably if speculative service is used. This is above and beyond what is currently achievable using client-side caching [3] and server-side dissemination [2]. We identify a number of parameters that could be used to fine-tune the level of speculation performed by the server.
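The following toy sketch conveys the flavor of server-controlled speculative service: the server keeps per-document counts of which documents clients tend to request next and piggybacks pointers to the most likely successors on each response. The per-client session model, data structures, and top-n policy are assumptions for illustration, not the statistics or parameters used in the paper.

```python
# Toy server-side speculative service: for each document, keep counts of
# which documents were requested next, and piggyback pointers to the most
# likely successors. The per-client "last request" session model and the
# top_n policy are illustrative assumptions.
from collections import defaultdict, Counter

class SpeculativeServer:
    def __init__(self, top_n: int = 2):
        self.successors = defaultdict(Counter)  # doc -> Counter of next docs
        self.last_request = {}                  # client -> last doc served
        self.top_n = top_n

    def request(self, client: str, doc: str):
        prev = self.last_request.get(client)
        if prev is not None:
            self.successors[prev][doc] += 1     # update per-document stats
        self.last_request[client] = doc
        # Serve the document plus the pointers the server speculates on.
        hints = [d for d, _ in self.successors[doc].most_common(self.top_n)]
        return {"document": doc, "speculative": hints}

if __name__ == "__main__":
    srv = SpeculativeServer()
    trace = [("a", "index.html"), ("a", "cs101.html"),
             ("b", "index.html"), ("b", "cs101.html"),
             ("c", "index.html")]
    for client, doc in trace:
        print(srv.request(client, doc))
```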
Abstract:
The SafeWeb anonymizing system has been lauded by the press and loved by its users; self-described as "the most widely used online privacy service in the world," it served over 3,000,000 page views per day at its peak. SafeWeb was designed to defeat content blocking by firewalls and to defeat Web server attempts to identify users, all without degrading Web site behavior or requiring users to install specialized software. In this article we describe how these fundamentally incompatible requirements were realized in SafeWeb's architecture, resulting in spectacular failure modes under simple JavaScript attacks. These exploits allow adversaries to turn SafeWeb into a weapon against its users, inflicting more damage on them than would have been possible if they had never relied on SafeWeb technology. By bringing these problems to light, we hope to remind readers of the chasm that continues to separate popular and technical notions of security.
Abstract:
The pervasiveness of personal computing platforms offers an unprecedented opportunity to deploy large-scale services that are distributed over wide physical spaces. Two major challenges face the deployment of such services: the often resource-limited nature of these platforms, and the necessity of preserving the autonomy of the owners of these devices. These challenges preclude the use of centralized control and preclude services that are subject to performance guarantees. To that end, this thesis advances a number of new distributed resource management techniques that are shown to be effective in such settings, focusing on two application domains: distributed Field Monitoring Applications (FMAs) and Message Delivery Applications (MDAs). In the context of FMAs, this thesis presents two techniques that are well suited to the fairly limited storage and power resources of autonomously mobile sensor nodes. The first technique relies on amorphous placement of sensory data through the use of novel storage management and sample diffusion techniques. The second approach relies on an information-theoretic framework to optimize local resource management decisions. Both approaches are proactive in that they aim to provide nodes with a view of the monitored field that reflects the characteristics of queries over that field, enabling them to handle more queries locally and thus reduce communication overheads. This thesis then recognizes node mobility as a resource to be leveraged and, in that respect, proposes novel mobility coordination techniques for FMAs and MDAs. Assuming that node mobility is governed by a spatio-temporal schedule featuring some slack, this thesis presents novel algorithms of various computational complexities to orchestrate the use of this slack to improve the performance of supported applications. The findings in this thesis, which are supported by analysis and extensive simulations, highlight the importance of two general design principles for distributed systems. First, a priori knowledge (e.g., about the target phenomena of FMAs and/or the workload of either FMAs or MDAs) could be used effectively for local resource management. Second, judicious leverage and coordination of node mobility could lead to significant performance gains for distributed applications deployed over resource-impoverished infrastructures.
Abstract:
The majority of the traffic (bytes) flowing over the Internet today has been attributed to the Transmission Control Protocol (TCP). This strong presence of TCP has recently spurred further investigations into its congestion avoidance mechanism and its effect on the performance of short and long data transfers. At the same time, the rising interest in enhancing Internet services while keeping the implementation cost low has led to several service-differentiation proposals. In such service-differentiation architectures, much of the complexity is placed only in access routers, which classify and mark packets from different flows. Core routers can then allocate enough resources to each class of packets so as to satisfy delivery requirements, such as predictable (consistent) and fair service. In this paper, we investigate the interaction among short and long TCP flows, and how TCP service can be improved by employing a low-cost service-differentiation scheme. Through control-theoretic arguments and extensive simulations, we show the utility of isolating TCP flows into two classes based on their lifetime/size, namely one class of short flows and another of long flows. With such class-based isolation, short and long TCP flows have separate service queues at routers. This protects each class of flows from the other as they possess different characteristics, such as burstiness of arrivals/departures and congestion/sending window dynamics. We show the benefits of isolation, in terms of better predictability and fairness, over traditional shared queueing systems with both tail-drop and Random-Early-Drop (RED) packet dropping policies. The proposed class-based isolation of TCP flows has several advantages: (1) the implementation cost is low since it only requires core routers to maintain per-class (rather than per-flow) state; (2) it promises to be an effective traffic engineering tool for improved predictability and fairness for both short and long TCP flows; and (3) stringent delay requirements of short interactive transfers can be met by increasing the amount of resources allocated to the class of short flows.
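As a toy illustration of the class-based isolation described above, the sketch below places packets into a short-flow queue and a long-flow queue and serves the two queues in a weighted round-robin, giving the short class the larger share, in the spirit of point (3). The size threshold, the weights, and the assumption that a flow's size is known at enqueue time are illustrative simplifications, not values or mechanisms from the paper.

```python
# Toy router with class-based isolation: short and long flows get separate
# queues, served in weighted round-robin with a larger share for the short
# class. Threshold, weights, and known-in-advance flow sizes are assumed.
from collections import deque

SHORT_FLOW_THRESHOLD = 10_000   # bytes; smaller flows count as "short"

class TwoClassRouter:
    def __init__(self, short_weight: int = 3, long_weight: int = 1):
        self.queues = {"short": deque(), "long": deque()}
        self.weights = {"short": short_weight, "long": long_weight}

    def enqueue(self, packet: dict):
        cls = "short" if packet["flow_size"] < SHORT_FLOW_THRESHOLD else "long"
        self.queues[cls].append(packet)

    def drain(self):
        """Serve packets in weighted round-robin until both queues empty."""
        served = []
        while any(self.queues.values()):
            for cls, weight in self.weights.items():
                for _ in range(weight):
                    if self.queues[cls]:
                        served.append(self.queues[cls].popleft())
        return served

if __name__ == "__main__":
    router = TwoClassRouter()
    router.enqueue({"flow": "bulk-1", "flow_size": 5_000_000})
    router.enqueue({"flow": "bulk-2", "flow_size": 5_000_000})
    router.enqueue({"flow": "web", "flow_size": 4_000})
    print([p["flow"] for p in router.drain()])  # the short flow jumps ahead
```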
Abstract:
The effectiveness of service provisioning in large-scale networks is highly dependent on the number and location of service facilities deployed at various hosts. The classical, centralized approach to determining the latter would amount to formulating and solving the uncapacitated k-median (UKM) problem (if the requested number of facilities is fixed), or the uncapacitated facility location (UFL) problem (if the number of facilities is also to be optimized). Clearly, such centralized approaches require knowledge of global topological and demand information, and thus do not scale and are not practical for large networks. The key question posed and answered in this paper is the following: "How can we determine in a distributed and scalable manner the number and location of service facilities?" We propose an innovative approach in which topology and demand information is limited to neighborhoods, or balls of small radius around selected facilities, whereas demand information is captured implicitly for the remaining (remote) clients outside these neighborhoods by mapping them to clients on the edge of the neighborhood; the ball radius regulates the trade-off between scalability and performance. We develop a scalable, distributed approach that answers our key question through an iterative reoptimization of the location and the number of facilities within such balls. We show that even for small values of the radius (1 or 2), our distributed approach achieves performance under various synthetic and real Internet topologies that is comparable to that of optimal, centralized approaches requiring full topology and demand information.
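The sketch below gives a much-simplified, single-facility illustration of the ball-restricted re-optimization idea: a facility repeatedly re-solves a tiny 1-median problem over the clients within an r-hop ball around it, with demand from clients outside the ball folded onto the ball's border nodes. Unweighted hop distances, the border mapping, and the fixed iteration count are simplifying assumptions, not the distributed protocol evaluated in the paper.

```python
# Much-simplified, single-facility version of ball-restricted
# re-optimization: re-solve a 1-median over the clients within an r-hop
# ball, folding demand from remote clients onto the ball's border nodes.
from collections import deque

def bfs_dist(adj, src, radius=None):
    """Hop distances from src, optionally truncated at `radius` hops."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        if radius is not None and dist[u] >= radius:
            continue
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def reoptimize_facility(adj, facility, demand, r):
    """Move one facility to the 1-median of the demand inside its r-ball."""
    ball = bfs_dist(adj, facility, radius=r)
    border = [v for v, d in ball.items() if d == r] or [facility]
    local = {v: demand.get(v, 0) for v in ball}
    remote = sum(w for v, w in demand.items() if v not in ball)
    for b in border:                       # map remote demand to the border
        local[b] += remote / len(border)
    best, best_cost = facility, float("inf")
    for cand in ball:                      # try each node inside the ball
        d = bfs_dist(adj, cand)
        cost = sum(w * d[v] for v, w in local.items())
        if cost < best_cost:
            best, best_cost = cand, cost
    return best

if __name__ == "__main__":
    # Path topology 0-1-2-3-4-5 with demand concentrated at nodes 4 and 5.
    adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
    demand = {0: 1, 1: 1, 4: 5, 5: 5}
    facility = 1
    for _ in range(3):                     # iterative re-optimization
        facility = reoptimize_facility(adj, facility, demand, r=2)
    print("facility settles at node", facility)
```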