2 results for disseminated intravascular coagulation

in Boston University Digital Common


Relevance:

10.00%

Publisher:

Abstract:

Unstable arterial plaque is likely the key component of atherosclerosis, a disease responsible for two-thirds of heart attacks and strokes and approximately 1 million deaths in the United States. Ultrasound imaging can detect plaque but cannot yet distinguish unstable plaque from stable plaque. In this work a scanning acoustic microscope (SAM) was implemented and validated as a tool to measure the acoustic properties of a sample. The goal for the SAM is to provide quantitative measurements of the acoustic properties of different plaque types, in order to understand the physical basis on which plaque may be identified acoustically. The SAM consists of a spherically focused transducer that operates in pulse-echo mode and is scanned in a 2D raster pattern over a sample. A plane wave analysis is presented which allows the impedance, attenuation and phase velocity of a sample to be determined from measurements of the echoes from the front and back of the sample. After the measurements, the attenuation and phase velocity were analysed to ensure that they were consistent with causality. The backscatter coefficient of the samples was obtained using the technique outlined by Chen et al. [8]. The transducer used here was able to determine acoustic properties from 10 to 40 MHz. The results for the impedance, attenuation and phase velocity were validated against published results for high- and low-density polyethylene. The plane wave approximation was validated by measuring the properties throughout the focal region and over a range of incidence angles from the transducer. The SAM was used to characterize a set of recipes for tissue-mimicking phantoms which demonstrate independent control over the impedance, attenuation, phase velocity and backscatter coefficient. An initial feasibility study on a human artery was performed.
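
In its simplest form, the plane-wave, pulse-echo analysis described in this abstract reduces to a few spectral ratios between a reference echo and the echoes from the front and back faces of the sample. The Python sketch below illustrates those standard relations; the function name plane_wave_properties, the reference-echo normalization, and the neglect of water attenuation and diffraction corrections are illustrative assumptions, not the thesis' actual processing chain.

    # Minimal sketch of standard plane-wave, pulse-echo relations (assumed, not
    # the thesis' exact method): impedance from the front-wall reflection
    # coefficient, attenuation and phase velocity from the back-wall echo.
    import numpy as np

    Z_WATER = 1.48e6  # acoustic impedance of water [Rayl], assumed coupling medium

    def plane_wave_properties(f, S_ref, S_front, S_back, thickness):
        """Estimate impedance, attenuation and phase velocity of a flat sample.

        f         : frequency axis [Hz]
        S_ref     : complex spectrum of an echo from a near-perfect reflector
        S_front   : complex spectrum of the echo from the sample's front face
        S_back    : complex spectrum of the echo from the sample's back face
        thickness : sample thickness d [m]
        """
        # Pressure reflection coefficient of the water/sample interface.
        R = np.abs(S_front) / np.abs(S_ref)

        # Impedance from R = (Z_s - Z_w) / (Z_s + Z_w).
        Z_sample = Z_WATER * (1.0 + R) / (1.0 - R)

        # The back-wall echo is scaled by the two-way transmission (1 - R^2) and
        # by exp(-2*alpha*d) over the round trip inside the sample.
        alpha = -np.log(np.abs(S_back) / (np.abs(S_front) * (1.0 - R**2))) \
                / (2.0 * thickness)                     # attenuation [Np/m]

        # Phase velocity from the unwrapped phase accumulated over the 2*d
        # round trip between the front- and back-wall echoes.
        dphi = np.unwrap(np.angle(S_front) - np.angle(S_back))
        c_phase = 4.0 * np.pi * f * thickness / dphi    # phase velocity [m/s]

        return Z_sample, alpha, c_phase

Checking the estimated attenuation and phase velocity against a causal (Kramers-Kronig consistent) dispersion relation, as the abstract mentions, would be an additional step on top of these frequency-domain estimates.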

Relevance:

10.00%

Publisher:

Abstract:

The exploding demand for services like the World Wide Web reflects the potential that is presented by globally distributed information systems. The number of WWW servers world-wide has doubled every 3 to 5 months since 1993, outstripping even the growth of the Internet. At each of these self-managed sites, the Common Gateway Interface (CGI) and Hypertext Transfer Protocol (HTTP) already constitute a rudimentary basis for contributing local resources to remote collaborations. However, the Web has serious deficiencies that make it unsuited for use as a true medium for metacomputing --- the process of bringing hardware, software, and expertise from many geographically dispersed sources to bear on large-scale problems. These deficiencies are, paradoxically, the direct result of the very simple design principles that enabled its exponential growth. There are many symptoms of the problems exhibited by the Web: disk and network resources are consumed extravagantly; information search and discovery are difficult; protocols are aimed at data movement rather than task migration, and ignore the potential for distributing computation. However, all of these can be seen as aspects of a single problem: as a distributed system for metacomputing, the Web offers unpredictable performance and unreliable results. The goal of our project is to use the Web as a medium (within either the global Internet or an enterprise intranet) for metacomputing in a reliable way with performance guarantees. We attack this problem at four levels:

(1) Resource Management Services: Globally distributed computing allows novel approaches to the old problems of performance guarantees and reliability. Our first set of ideas involves setting up a family of real-time resource management models organized by the Web Computing Framework with a standard Resource Management Interface (RMI), a Resource Registry, a Task Registry, and resource management protocols that allow resource needs and availability information to be collected and disseminated, so that a family of algorithms with varying computational precision and accuracy of representations can be chosen to meet real-time and reliability constraints.

(2) Middleware Services: Complementary to techniques for allocating and scheduling available resources to serve application needs under real-time and reliability constraints, the second set of ideas aims at reducing communication latency, traffic congestion, server workload, etc. We develop customizable middleware services that exploit application characteristics in traffic analysis to drive new server/browser design strategies (e.g., exploiting the self-similarity of Web traffic), derive document access patterns via multiserver cooperation, and use them in speculative prefetching, document caching, and aggressive replication to reduce server load and bandwidth requirements.

(3) Communication Infrastructure: To achieve any guarantee of quality of service or performance, one must get at the network layer, which can provide the basic guarantees of bandwidth, latency, and reliability. Therefore, the third area is a set of new techniques in network service and protocol designs.

(4) Object-Oriented Web Computing Framework: A useful resource management system must deal with job priority, fault-tolerance, quality of service, complex resources such as ATM channels, probabilistic models, etc., and models must be tailored to represent the best tradeoff for a particular setting. This requires a family of models, organized within an object-oriented framework, because no one-size-fits-all approach is appropriate. This presents a software engineering challenge requiring integration of solutions at all levels: algorithms, models, protocols, and profiling and monitoring tools. The framework captures the abstract class interfaces of the collection of cooperating components, but allows the concretization of each component to be driven by the requirements of a specific approach and environment.
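
To make level (4) more concrete, the sketch below shows one way the abstract class interfaces named in this abstract (a Resource Management Interface, a Resource Registry and a Task Registry) could be captured, with interchangeable concrete policies. The method names, data fields and the round-robin example policy are invented for illustration and are not the project's actual API.

    # A minimal Python sketch of swappable abstract interfaces in the spirit of
    # level (4). All names below are illustrative assumptions.
    from abc import ABC, abstractmethod
    from dataclasses import dataclass
    from itertools import cycle


    @dataclass
    class Resource:
        name: str
        capacity: float                    # e.g. available CPU share or bandwidth


    @dataclass
    class Task:
        name: str
        demand: float                      # required capacity
        deadline_ms: float | None = None   # real-time constraint, if any


    class ResourceRegistry(ABC):
        """Collects and disseminates resource availability information."""
        @abstractmethod
        def register(self, resource: Resource) -> None: ...
        @abstractmethod
        def available(self) -> list[Resource]: ...


    class TaskRegistry(ABC):
        """Tracks submitted tasks and their constraints."""
        @abstractmethod
        def submit(self, task: Task) -> None: ...
        @abstractmethod
        def pending(self) -> list[Task]: ...


    class ResourceManagementInterface(ABC):
        """Maps pending tasks onto available resources under a chosen policy."""
        @abstractmethod
        def schedule(self, tasks: TaskRegistry,
                     resources: ResourceRegistry) -> dict[str, str]: ...


    class InMemoryResourceRegistry(ResourceRegistry):
        def __init__(self) -> None:
            self._resources: list[Resource] = []
        def register(self, resource: Resource) -> None:
            self._resources.append(resource)
        def available(self) -> list[Resource]:
            return [r for r in self._resources if r.capacity > 0]


    class InMemoryTaskRegistry(TaskRegistry):
        def __init__(self) -> None:
            self._tasks: list[Task] = []
        def submit(self, task: Task) -> None:
            self._tasks.append(task)
        def pending(self) -> list[Task]:
            return list(self._tasks)


    class RoundRobinRMI(ResourceManagementInterface):
        """One concretization: ignore deadlines, spread tasks round-robin."""
        def schedule(self, tasks, resources):
            pool = cycle(resources.available())
            return {t.name: next(pool).name for t in tasks.pending()}

A real-time or reliability-aware scheduler would simply be another concretization of the same interface, which is the "family of models" tradeoff the abstract describes.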