7 results for fragmentation
in Boston University Digital Common
Abstract:
Shock wave lithotripsy is the preferred treatment modality for kidney stones in the United States. Despite clinical use for over twenty-five years, the mechanisms of stone fragmentation are still under debate. A piezoelectric array was employed to examine the effect of waveform shape and pressure distribution on stone fragmentation in lithotripsy. The array consisted of 170 elements placed on the inner surface of a 15 cm-radius spherical cap. Each element was driven independently by one of 170 individual pulsers, each capable of generating 1.2 kV. The acoustic field was characterized using a fiber optic probe hydrophone with a bandwidth of 30 MHz and a spatial resolution of 100 μm. When all elements were driven simultaneously, the focal waveform was a shock wave with peak pressures p+ = 65±3 MPa and p− = −16±2 MPa, and the −6 dB focal region was 13 mm long and 2 mm wide. The delay of each element was the only control parameter for customizing the acoustic field and waveform shape, with the aim of investigating hypothesized mechanisms of stone fragmentation such as spallation, shear, squeezing, and cavitation. The acoustic field customization was achieved by employing the angular spectrum approach to model the forward wave propagation and least-squares regression to determine the optimal set of delays. Results from the acoustic field customization routine and its implications for stone fragmentation will be discussed.
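Since the per-element delays are the only control parameter, the simplest building block is the geometric focusing rule: fire each element early by its extra time of flight to the target point, so all wavefronts arrive together. Below is a minimal sketch of that step only, with an assumed 1500 m/s sound speed and illustrative element positions; the abstract's full customization (angular-spectrum forward modeling plus least-squares delay optimization) is not reproduced here.

```python
import numpy as np

C = 1500.0   # assumed speed of sound in water, m/s
R = 0.15     # radius of the spherical cap, m (15 cm, as in the abstract)

def focusing_delays(elements, focus):
    """Per-element firing delays so that all wavefronts arrive at `focus`
    at the same instant: the farthest element fires first (zero delay)."""
    d = np.linalg.norm(elements - focus, axis=1)  # element-to-focus distances
    return (d.max() - d) / C                      # seconds, all >= 0

# Illustrative example: five elements along one meridian of the cap,
# steering the focus 5 mm past the center of curvature at the origin.
theta = np.linspace(0.1, 0.5, 5)                  # polar angles, rad
elements = np.stack(
    [R * np.sin(theta), np.zeros_like(theta), -R * np.cos(theta)], axis=1)
print(focusing_delays(elements, np.array([0.0, 0.0, 0.005])))
```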
Abstract:
The popularity of TCP/IP, coupled with the promise of high-speed communication using Asynchronous Transfer Mode (ATM) technology, has prompted the network research community to propose a number of techniques to adapt TCP/IP to ATM network environments. ATM offers Available Bit Rate (ABR) and Unspecified Bit Rate (UBR) services for best-effort traffic, such as conventional file transfer. However, recent studies have shown that TCP/IP, when implemented using ABR or UBR, leads to serious performance degradations, especially when the utilization of network resources (such as switch buffers) is high. Proposed techniques (switch-level enhancements, for example) that attempt to patch up TCP/IP over ATMs have had limited success in alleviating this problem. The major reason for TCP/IP's poor performance over ATMs has been consistently attributed to packet fragmentation, which is the result of ATM's 53-byte cell-oriented switching architecture. In this paper, we present a new transport protocol, TCP Boston, that turns ATM's 53-byte cell-oriented switching architecture into an advantage for TCP/IP. At the core of TCP Boston is the Adaptive Information Dispersal Algorithm (AIDA), an efficient encoding technique that allows for dynamic redundancy control. AIDA makes TCP/IP's performance less sensitive to cell losses, thus ensuring a graceful degradation of TCP/IP's performance when faced with congested resources. In this paper, we introduce AIDA and overview the main features of TCP Boston. We present detailed simulation results that show the superiority of our protocol when compared to other adaptations of TCP/IP over ATMs. In particular, we show that TCP Boston improves TCP/IP's performance over ATMs for both network-centric metrics (e.g., effective throughput) and application-centric metrics (e.g., response time).
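The property AIDA builds on is information dispersal: a packet encoded into n cells can be rebuilt from any m of them, so isolated cell losses no longer invalidate the whole packet. The sketch below shows that underlying dispersal idea only; the field GF(257) and the parameters are illustrative choices, and this is not the paper's AIDA implementation, whose "adaptive" part would additionally tune the redundancy n − m to observed congestion.

```python
# Minimal information-dispersal sketch: encode each m-byte block into
# n >= m cells so that ANY m received cells rebuild the block.
P = 257  # prime just above 255, so every byte is a field element

def encode(data: bytes, m: int, n: int):
    """Fragment x holds each m-byte block's polynomial evaluated at x."""
    assert n >= m and len(data) % m == 0 and n < P
    frags = {x: [] for x in range(1, n + 1)}
    for off in range(0, len(data), m):
        block = data[off:off + m]            # coefficients b0..b(m-1)
        for x, vals in frags.items():
            acc = 0
            for b in reversed(block):        # Horner's rule mod P
                acc = (acc * x + b) % P
            vals.append(acc)
    return frags

def _solve(A, b):
    """Gauss-Jordan elimination over GF(P): solve A*c = b."""
    m = len(A)
    M = [row[:] + [v] for row, v in zip(A, b)]
    for c in range(m):
        piv = next(r for r in range(c, m) if M[r][c])
        M[c], M[piv] = M[piv], M[c]
        inv = pow(M[c][c], P - 2, P)         # Fermat inverse
        M[c] = [(v * inv) % P for v in M[c]]
        for r in range(m):
            if r != c and M[r][c]:
                f = M[r][c]
                M[r] = [(vr - f * vc) % P for vr, vc in zip(M[r], M[c])]
    return [M[r][m] for r in range(m)]

def decode(frags: dict, m: int, length: int) -> bytes:
    """Rebuild the original data from any m surviving fragments."""
    xs = list(frags)[:m]
    A = [[pow(x, j, P) for j in range(m)] for x in xs]
    out = bytearray()
    for i in range(length // m):
        out.extend(_solve(A, [frags[x][i] for x in xs]))
    return bytes(out)

frags = encode(b"ATM cells", 3, 5)           # 9 bytes -> 5 fragments
del frags[1], frags[4]                       # lose two of five cells
print(decode(frags, 3, 9))                   # b'ATM cells'
```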
Abstract:
This dissertation narrates the historical development of American evangelical missions to the poor from 1947 to 2005 and analyzes the discourse of its main parachurch proponents, especially World Vision, Compassion International, Food for the Hungry, Samaritan's Purse, Sojourners, Evangelicals for Social Action, and the Christian Community Development Association. Although recent scholarship on evangelicalism has been prolific, much of the historical work has focused on earlier periods. Sociological and political-science scholarship on the postwar period has been attracted mostly to controversies surrounding the Religious Right, leaving evangelicalism's resurgent concern for the poor relatively understudied. This dissertation addresses these lacunae. The study consists of three chronological parts, each marked by a distinctive model of mission to the poor. First, the 1950s were characterized by compassionate charity for individual emergencies, a model that cohered neatly with evangelicalism's individualism and emotionalism. This model should be regarded as the quintessential, bedrock evangelical theory of mission to the poor. It remained strong throughout the entire postwar period. Second, in the 1970s, a strong countercurrent emerged that advocated for penitent protest against structural injustice and underdevelopment. In contrast to the first model, it was distinguished by going against the grain of many aspects of evangelical culture, especially its reflexive patriotism and individualism. Third, in the 1990s, an important movement towards developing potential through hopeful holism gained prominence. Its advocates were confident that their integration of biblical principles with insights from contemporary economic development praxis would contribute to drastic, widespread reductions in poverty. This model signaled a new optimism in evangelicalism's engagement with the broader world. The increasing prominence of missions to the poor within American evangelicalism led to dramatic changes within the movement's worldview: by 2005, evangelicals were mostly unified in their expressed concern for the physical and social needs of the poor, a position that radically reversed their immediate postwar worldview of near-exclusive focus on the spiritual needs of individuals. Nevertheless, missions to the poor also paralleled, reinforced, and hastened the increasing fragmentation of evangelicalism's identity, as each missional model advocated widely divergent approaches to poverty amelioration that were undergirded by diverse sociological, political, and theological assumptions.
Abstract:
This report summarizes the technical presentations and discussions that took place during RTDB'96: the First International Workshop on Real-Time Databases, which was held on March 7 and 8, 1996 in Newport Beach, California. The main goals of this workshop were (1) to review recent advances in real-time database systems research, (2) to promote interaction among real-time database researchers and practitioners, and (3) to evaluate the maturity and directions of real-time database technology.
Abstract:
High-speed networks, such as ATM networks, are expected to support diverse Quality of Service (QoS) constraints, including real-time QoS guarantees. Real-time QoS is required by many applications, such as those involving voice and video communication. To support such services, routing algorithms that allow applications to reserve the needed bandwidth over a Virtual Circuit (VC) have been proposed. Commonly, these bandwidth-reservation algorithms assign VCs to routes using the least-loaded concept, and thus result in balancing the load over the set of all candidate routes. In this paper, we show that for such reservation-based protocols, which allow for the exclusive use of a preset fraction of a resource's bandwidth for an extended period of time, load balancing is not desirable, as it results in resource fragmentation that adversely affects the likelihood of accepting new reservations. In particular, we show that load-balancing VC routing algorithms are not appropriate when the main objective of the routing protocol is to increase the probability of finding routes that satisfy incoming VC requests, as opposed to equalizing the bandwidth utilization along the various routes. We present an on-line VC routing scheme based on the concept of "load profiling", which allows the distribution of "available" bandwidth across a set of candidate routes to match the characteristics of incoming VC QoS requests. We show the effectiveness of our load-profiling approach when compared to traditional load-balancing and load-packing VC routing schemes.
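To make the contrast concrete, here is a minimal sketch of a profile-aware route picker; the scoring rule and the numbers are illustrative assumptions, not the paper's algorithm. Among the feasible candidate routes, it keeps the residual bandwidths best able to serve the expected mix of request sizes, whereas a least-loaded picker always grabs the emptiest route.

```python
def profile_score(avail, size_dist):
    """How well current residuals cover the expected request mix: for each
    size s with probability p, credit p if some route can still fit s."""
    return sum(p for s, p in size_dist.items() if any(a >= s for a in avail))

def route_by_profile(avail, demand, size_dist):
    """Among feasible routes, pick the one whose post-assignment residual
    profile best matches the expected request sizes."""
    feasible = [i for i, a in enumerate(avail) if a >= demand]
    if not feasible:
        return None
    def after(i):
        res = avail[:]
        res[i] -= demand
        return profile_score(res, size_dist)
    return max(feasible, key=after)

# Routes with 3, 5, and 8 units free; requests are 80% size-1, 20% size-8.
# Least-loaded would put a size-1 request on the 8-unit route, destroying
# the ability to serve a future size-8 request; profiling avoids this.
print(route_by_profile([3, 5, 8], 1, {1: 0.8, 8: 0.2}))  # -> 0, not 2
```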
Abstract:
To support the diverse Quality of Service (QoS) requirements of real-time (e.g. audio/video) applications in integrated services networks, several routing algorithms have been proposed that allow for the reservation of the needed bandwidth over a Virtual Circuit (VC) established on one of several candidate routes. Traditionally, such routing is done using the least-loaded concept, and thus results in balancing the load across the set of candidate routes. In a recent study, we established the inadequacy of this load-balancing practice and proposed the use of load profiling as an alternative. Load-profiling techniques allow the distribution of "available" bandwidth across a set of candidate routes to match the characteristics of incoming VC QoS requests. In this paper we thoroughly characterize the performance of VC routing using load profiling and contrast it to routing using load balancing and load packing. We do so both analytically and via extensive simulations of multi-class traffic routing in Virtual Path (VP) based networks. Our findings confirm that for routing guaranteed-bandwidth flows in VP networks, load balancing is not desirable, as it results in VP bandwidth fragmentation, which adversely affects the likelihood of accepting new VC requests. This fragmentation is more pronounced when the granularity of VC requests is large; typically, this occurs when a common VC is established to carry the aggregate traffic flow of many high-bandwidth real-time sources. For VP-based networks, our simulation results show that our load-profiling VC routing scheme performs as well as or better than traditional load-balancing VC routing in terms of revenue, under both skewed and uniform workloads. Furthermore, load-profiling routing improves routing fairness by proactively increasing the chances of admitting high-bandwidth connections.
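A toy simulation can reproduce the fragmentation effect described above. Every parameter below (capacities, request mix, holding time) is invented for illustration: a balancing policy spreads small reservations across all routes, so large requests tend to find no single route with enough residual bandwidth, while a packing policy concentrates small reservations and keeps whole routes free.

```python
import random

random.seed(1)
CAP, ROUTES, STEPS = 10, 4, 5000
SIZES = [1, 1, 1, 4]          # mostly small requests, occasionally large

def run(policy, hold=25):
    avail = [CAP] * ROUTES
    held = []                                  # (expiry, route, demand)
    seen, blocked = {1: 0, 4: 0}, {1: 0, 4: 0}
    for t in range(STEPS):
        for h in [h for h in held if h[0] <= t]:
            avail[h[1]] += h[2]                # reservation departs
            held.remove(h)
        d = random.choice(SIZES)
        seen[d] += 1
        feas = [i for i in range(ROUTES) if avail[i] >= d]
        if not feas:
            blocked[d] += 1                    # no route can fit the request
            continue
        i = policy(feas, avail)
        avail[i] -= d
        held.append((t + hold, i, d))
    return {d: round(blocked[d] / seen[d], 3) for d in seen}

balancing = lambda feas, avail: max(feas, key=lambda i: avail[i])
packing   = lambda feas, avail: min(feas, key=lambda i: avail[i])
print("balancing:", run(balancing))   # size-4 requests blocked more often
print("packing:  ", run(packing))     # tight fits preserve large residuals
```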
Abstract:
Quality of Service (QoS) guarantees are required by an increasing number of applications to ensure a minimal level of fidelity in the delivery of application data units through the network. Application-level QoS does not necessarily follow from any transport-level QoS guarantees regarding the delivery of the individual cells (e.g. ATM cells) which comprise the application's data units. The distinction between application-level and transport-level QoS guarantees is due primarily to the fragmentation that occurs when transmitting large application data units (e.g. IP packets or video frames) using much smaller network cells, whereby the partial delivery of a data unit is useless, and the bandwidth spent to partially transmit it is wasted. The data units transmitted by an application may vary in size while being generated at a constant rate, which results in a variable bit rate (VBR) data flow that requires QoS guarantees. Statistical multiplexing is inadequate for such flows, because no guarantees can be made and no firewall property exists between different data flows. In this paper, we present a novel resource management paradigm for the maintenance of application-level QoS for VBR flows. Our paradigm is based on Statistical Rate Monotonic Scheduling (SRMS), in which (1) each application generates its variable-size data units at a fixed rate, (2) the partial delivery of data units is of no value to the application, and (3) the QoS guarantee extended to the application is the probability that an arbitrary data unit will be successfully transmitted through the network to or from the application.
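The paradigm's QoS metric, the probability that an arbitrary data unit goes through whole, is easy to illustrate. The sketch below uses invented numbers (one unit per period, a Gaussian size distribution, a fixed per-period bandwidth allowance) and is not the SRMS scheduler itself: a unit is transmitted only if it fits entirely within the allowance, since partial delivery is worthless, and the achieved QoS is the fraction of units delivered intact.

```python
import random

random.seed(0)

def delivered_fraction(allowance, unit_sizes):
    """Empirical probability that an arbitrary data unit is fully sent:
    a unit counts only if it fits entirely within the allowance."""
    ok = sum(1 for s in unit_sizes if s <= allowance)
    return ok / len(unit_sizes)

# VBR flow: one unit per period at a fixed rate, sizes vary around 10.
sizes = [max(1, round(random.gauss(10, 3))) for _ in range(10_000)]

for allowance in (10, 12, 14, 16):
    print(allowance, round(delivered_fraction(allowance, sizes), 3))
# Reserving only for the mean size delivers roughly half the units intact;
# the SRMS-style question is the smallest allowance that meets a target
# probability, e.g. 0.95.
```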