13 results for "mimicking attack"

in Boston University Digital Common


Relevance: 20.00%

Abstract:

The deposition of ultrasonic energy in tissue can cause tissue damage due to local heating. For pressures above a critical threshold, cavitation will occur in tissue and bubbles will be created. These oscillating bubbles can induce a much larger thermal energy deposition in the local region. Traditionally, clinicians and researchers have not exploited this bubble-enhanced heating since cavitation behavior is erratic and very difficult to control. The present work is an attempt to control and utilize this bubble-enhanced heating. First, by applying appropriate bubble dynamic models, limits on the asymptotic bubble size distribution are obtained for different driving pressures at 1 MHz. The size distributions are bounded by two thresholds: the bubble shape instability threshold and the rectified diffusion threshold. The growth rate of bubbles in this region is also given, and the resulting time evolution of the heating in a given insonation scenario is modeled. In addition, some experimental results have been obtained to investigate the bubble-enhanced heating in an agar- and graphite-based tissue-mimicking material. Heating as a function of dissolved gas concentrations in the tissue phantom is investigated. Bubble-based contrast agents are introduced to investigate the effect on the bubble-enhanced heating, and to control the initial bubble size distribution. The mechanisms of cavitation-related bubble heating are investigated, and a heating model is established using our understanding of the bubble dynamics. By fitting appropriate bubble densities in the ultrasound field, the peak temperature changes are simulated. The results for required bubble density are given. Finally, a simple bubbly liquid model is presented to estimate the shielding effects which may be important even for low void fraction during high intensity focused ultrasound (HIFU) treatment.
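
The bubble sizes in question are set by the dynamics near the 1 MHz driving frequency. As a purely illustrative back-of-envelope check (not taken from the dissertation's bubble dynamics models), the Minnaert formula gives the rest radius of a gas bubble in water that resonates linearly at that frequency:

```python
import math

# Minnaert resonance radius for a gas bubble in water: a back-of-envelope
# estimate, not taken from the dissertation's bubble dynamics models.
# f0 = (1 / (2*pi*R0)) * sqrt(3*gamma*p0 / rho)  =>  solve for R0.

def minnaert_radius(f0_hz, gamma=1.4, p0=101.325e3, rho=998.0):
    """Radius (m) of a bubble whose linear resonance frequency is f0_hz."""
    return math.sqrt(3.0 * gamma * p0 / rho) / (2.0 * math.pi * f0_hz)

if __name__ == "__main__":
    r0 = minnaert_radius(1.0e6)  # 1 MHz driving frequency, as in the abstract
    print(f"Resonant bubble radius at 1 MHz: {r0 * 1e6:.1f} micron")  # ~3.3 um
```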

Relevance: 20.00%

Abstract:

A complete understanding of high-intensity focused ultrasound-induced temperature changes in tissue requires insight into all potential mechanisms for heat deposition. Applications of therapeutic ultrasound often utilize acoustic pressures capable of producing cavitation activity. Recognizing the ability of bubbles to transfer acoustic energy into heat generation, a study of the role bubbles play in tissue hyperthermia becomes necessary. These bubbles are typically smaller than 50 μm. This dissertation examines the contribution of bubbles and their motion to an enhanced heating effect observed in a tissue-mimicking phantom. A series of experiments established a relationship between bubble activity and an enhanced temperature rise in the phantom by simultaneously measuring both the temperature change and acoustic emissions from bubbles. It was found that a strong correlation exists between the onset of the enhanced heating effect and observable cavitation activity. In addition, the likelihood of observing the enhanced heating effect was largely unaffected by the insonation duration for all but the shortest of insonation times, 0.1 seconds. Numerical simulations were used to investigate the relative importance of two candidate mechanisms for heat deposition from bubbles as a means to quantify the number of bubbles required to produce the enhanced temperature rise. The energy deposition from viscous dissipation and the absorption of radiated sound from bubbles were considered as a function of the bubble size and the viscosity of the surrounding medium. Although both mechanisms were capable of producing the level of energy required for the enhanced heating effect, it was found that inertial cavitation, associated with high acoustic radiation and low viscous dissipation, coincided with the nature of the cavitation best detected by the experimental system. The number of bubbles required to account for the enhanced heating effect was determined through the numerical study to be on the order of 150 or less.
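
The two candidate mechanisms lend themselves to a rough, linearized back-of-envelope comparison. This is not the dissertation's numerical model: the material parameters below are assumed, and only the power the bubble radiates is estimated, not the fraction of it the medium actually absorbs.

```python
import math

# Rough, linearized comparison of two bubble heat-deposition channels at small
# oscillation amplitude (NOT the dissertation's numerical model): viscous
# dissipation in the surrounding liquid versus the acoustic power the bubble
# re-radiates as a monopole source (whose absorption is not modeled here).

RHO = 1000.0   # liquid density, kg/m^3 (water-like, assumed)
C = 1500.0     # sound speed, m/s (assumed)
MU = 1.0e-3    # shear viscosity, Pa*s (water; soft tissue is much higher)

def powers(r0, freq, eps):
    """Time-averaged powers (W) for a bubble of rest radius r0 (m) driven at
    freq (Hz) with fractional radial oscillation amplitude eps."""
    omega = 2.0 * math.pi * freq
    u = eps * r0 * omega                                    # radial velocity amplitude
    p_visc = 8.0 * math.pi * MU * r0 * u**2                 # viscous dissipation
    k = omega / C                                           # acoustic wavenumber
    p_rad = 2.0 * math.pi * RHO * C * k**2 * r0**4 * u**2   # monopole radiated power
    return p_visc, p_rad

if __name__ == "__main__":
    pv, pr = powers(r0=3.0e-6, freq=1.0e6, eps=0.1)  # illustrative values only
    print(f"viscous: {pv:.3e} W   radiated: {pr:.3e} W")
```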

Relevance: 10.00%

Abstract:

Acousto-optic imaging (AOI) in optically diffuse media is a hybrid imaging modality in which a focused ultrasound beam is used to locally phase modulate light inside of turbid media. The modulated optical field carries with it information about the optical properties in the region where the light and sound interact. The motivation for the development of AOI systems is to measure optical properties at large depths within biological tissue with high spatial resolution. A photorefractive crystal (PRC) based interferometry system is developed for the detection of phase modulated light in AOI applications. Two-wave mixing in the PRC creates a reference beam that is wavefront matched to the modulated optical field collected from the specimen. The phase modulation is converted to an intensity modulation at the optical detector when these two fields interfere. The interferometer has a high optical etendue, making it well suited for AOI where the scattered light levels are typically low. A theoretical model for the detection of acoustically induced phase modulation in turbid media using PRC based interferometry is detailed. An AOI system, using a single element focused ultrasound transducer to pump the AO interaction and the PRC based detection system, is fabricated and tested on tissue mimicking phantoms. It is found that the system has sufficient sensitivity to detect broadband AO signals generated using pulsed ultrasound, allowing for AOI at low time averaged ultrasound output levels. The spatial resolution of the AO imaging system is studied as a function of the ultrasound pulse parameters. A theoretical model of light propagation in turbid media is used to explore the dependence of the AO response on the experimental geometry, light collection aperture, and target optical properties. Finally, a multimodal imaging system combining pulsed AOI and conventional B-mode ultrasound imaging is developed. B-mode ultrasound and AO images of targets embedded in both highly diffuse phantoms and biological tissue ex vivo are obtained, and millimeter resolution is demonstrated in three dimensions. The AO images are intrinsically co-registered with the B-mode ultrasound images. The results suggest that AOI can be used to supplement conventional B-mode ultrasound imaging with optical information.
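
As a toy illustration of the detection principle (not the dissertation's photorefractive two-wave-mixing model), the snippet below interferes a weak, phase-modulated signal field with a wavefront-matched reference at an assumed quadrature operating point, which converts the acoustically induced phase modulation into an intensity modulation at the ultrasound frequency:

```python
import numpy as np

# Toy interference model, illustrative only: a weak phase-modulated signal beam
# plus a wavefront-matched reference, detected as intensity. All amplitudes,
# frequencies, and the quadrature bias below are assumptions.

fs = 50.0e6                        # sample rate, Hz (assumed)
f_us = 1.0e6                       # ultrasound (modulation) frequency, Hz
t = np.arange(0, 20e-6, 1.0 / fs)  # 20 microseconds of signal

E_ref = 1.0                                  # reference field amplitude
E_sig = 0.1                                  # weak collected diffuse light
phi = 0.2 * np.sin(2.0 * np.pi * f_us * t)   # acoustically induced phase
theta = np.pi / 2                            # assumed quadrature operating point

# Detected intensity: |E_ref + E_sig * exp(i*(phi + theta))|^2
intensity = np.abs(E_ref + E_sig * np.exp(1j * (phi + theta))) ** 2

# At quadrature and small phi, the AC part is ~ -2*E_ref*E_sig*sin(phi), i.e.
# an intensity modulation at the ultrasound frequency.
ac = intensity - intensity.mean()
print(f"AC modulation (std): {ac.std():.4f} about mean {intensity.mean():.3f}")
```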

Relevance: 10.00%

Abstract:

Unstable arterial plaque is likely the key component of atherosclerosis, a disease which is responsible for two-thirds of heart attacks and strokes, leading to approximately 1 million deaths in the United States. Ultrasound imaging is able to detect plaque but cannot yet distinguish unstable plaque from stable plaque. In this work a scanning acoustic microscope (SAM) was implemented and validated as a tool to measure the acoustic properties of a sample. The goal for the SAM is to provide quantitative measurements of the acoustic properties of different plaque types, in order to understand the physical basis by which plaque may be identified acoustically. The SAM consists of a spherically focused transducer which operates in pulse-echo mode and is scanned in a 2D raster pattern over a sample. A plane wave analysis is presented which allows the impedance, attenuation and phase velocity of a sample to be determined from measurements of the echoes from the front and back of the sample. After the measurements, the attenuation and phase velocity were analysed to ensure that they were consistent with causality. The backscatter coefficient of the samples was obtained using the technique outlined by Chen et al. [8]. The transducer used here was able to determine acoustic properties from 10 to 40 MHz. The results for the impedance, attenuation and phase velocity were validated for high- and low-density polyethylene against published results. The plane wave approximation was validated by measuring the properties throughout the focal region and throughout a range of incidence angles from the transducer. The SAM was used to characterize a set of recipes for tissue-mimicking phantoms which demonstrate independent control over the impedance, attenuation, phase velocity and backscatter coefficient. An initial feasibility study on a human artery was performed.
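
A simplified plane-wave substitution calculation of this kind is sketched below. It is illustrative only (the thesis's full analysis also enforces causality and extracts the backscatter coefficient, which are not sketched here), and every input value is a hypothetical measurement.

```python
import math

# Simplified plane-wave substitution estimate of impedance, phase velocity and
# attenuation from front- and back-wall echoes. Illustrative only; all inputs
# below are hypothetical measured quantities.

Z_WATER = 1.48e6   # acoustic impedance of the coupling water, Rayl (assumed)
C_WATER = 1480.0   # sound speed in water, m/s (assumed)

def sample_properties(d, r_front, a_back_ref, a_back_sample, dt_advance):
    """Estimate impedance, phase velocity and attenuation of a flat sample.

    d              sample thickness, m
    r_front        front-echo amplitude relative to a perfect reflector
    a_back_ref     back-wall echo amplitude with no sample in the path
    a_back_sample  back-wall echo amplitude through the sample
    dt_advance     how much earlier the back-wall echo arrives with the
                   sample in place, s (sample assumed faster than water)
    """
    # Impedance from the pressure reflection coefficient at the water/sample face
    z = Z_WATER * (1.0 + r_front) / (1.0 - r_front)
    # Phase velocity from the change in two-way time of flight across thickness d
    c = 1.0 / (1.0 / C_WATER - dt_advance / (2.0 * d))
    # Attenuation from the through-sample amplitude loss over the two-way path,
    # ignoring transmission losses at the two interfaces for simplicity
    alpha_db_per_cm = 20.0 * math.log10(a_back_ref / a_back_sample) / (2.0 * d) / 100.0
    return z, c, alpha_db_per_cm

if __name__ == "__main__":
    z, c, a = sample_properties(d=2.0e-3, r_front=0.05, a_back_ref=1.0,
                                a_back_sample=0.7, dt_advance=0.4e-6)
    print(f"Z = {z:.2e} Rayl, c = {c:.0f} m/s, attenuation = {a:.2f} dB/cm")
```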

Relevance: 10.00%

Abstract:

Neoplastic tissue is typically highly vascularized, contains abnormal concentrations of extracellular proteins (e.g. collagen, proteoglycans) and has a high interstitial fluid pressure compared to most normal tissues. These changes result in an overall stiffening typical of most solid tumors. Elasticity Imaging (EI) is a technique which uses imaging systems to measure relative tissue deformation and thus noninvasively infer its mechanical stiffness. Stiffness is recovered from measured deformation by using an appropriate mathematical model and solving an inverse problem. The integration of EI with existing imaging modalities can improve their diagnostic and research capabilities. The aim of this work is to develop and evaluate techniques to image and quantify the mechanical properties of soft tissues in three dimensions (3D). To that end, this thesis presents and validates a method by which three dimensional ultrasound images can be used to image and quantify the shear modulus distribution of tissue-mimicking phantoms. This work is presented to motivate and justify the use of this elasticity imaging technique in a clinical breast cancer screening study. The imaging methodologies discussed are intended to improve the specificity of mammography practices in general. During the development of these techniques, several issues concerning the accuracy and uniqueness of the result were elucidated. Two new algorithms for 3D EI are designed and characterized in this thesis. The first provides three dimensional motion estimates from ultrasound images of the deforming material. The novel features include finite element interpolation of the displacement field, inclusion of prior information and the ability to enforce physical constraints. The roles of regularization, mesh resolution and an incompressibility constraint on the accuracy of the measured deformation are quantified. The estimated signal-to-noise ratios of the measured displacement fields are approximately 1800, 21 and 41 for the axial, lateral and elevational components, respectively. The second algorithm recovers the shear elastic modulus distribution of the deforming material by efficiently solving the three dimensional inverse problem as an optimization problem. This method utilizes finite element interpolations, the adjoint method to evaluate the gradient and a quasi-Newton BFGS method for optimization. Its novel features include the use of the adjoint method and TVD regularization with piecewise constant interpolation. A source of non-uniqueness in this inverse problem is identified theoretically, demonstrated computationally, explained physically and overcome practically. Both algorithms were tested on ultrasound data of independently characterized tissue-mimicking phantoms. The recovered elastic modulus was in all cases within 35% of the reference elastic contrast. Finally, the preliminary application of these techniques to tomosynthesis images showed the feasibility of imaging an elastic inclusion.
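
To make the modulus-reconstruction step concrete, here is a deliberately tiny 1-D analogue, not the thesis's 3-D finite element/adjoint implementation: a quasi-Newton optimizer recovers a stiffness profile from simulated displacement data, with the gradient finite-differenced rather than computed via the adjoint method, and with no regularization.

```python
import numpy as np
from scipy.optimize import minimize

# Toy 1-D inverse problem (illustrative only): recover a modulus profile mu(x)
# of a bar under unit traction from noisy "measured" nodal displacements by
# minimizing a least-squares data mismatch with a quasi-Newton (L-BFGS-B) method.

n_el, h = 20, 1.0 / 20          # number of elements and element size
mu_true = np.ones(n_el)
mu_true[8:13] = 3.0             # a stiff inclusion

def forward(mu):
    """Nodal displacements of a 1-D bar under unit stress: u_k = sum_i h/mu_i."""
    return np.cumsum(h / mu)

rng = np.random.default_rng(0)
u_meas = forward(mu_true) + 1e-4 * rng.standard_normal(n_el)  # noisy data

def objective(mu):
    r = forward(mu) - u_meas
    return 0.5 * np.dot(r, r)

res = minimize(objective, x0=np.ones(n_el), method="L-BFGS-B",
               bounds=[(0.1, 10.0)] * n_el)
print("recovered modulus inside the inclusion:", res.x[8:13].round(2))
```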

Relevance: 10.00%

Abstract:

In this paper, we expose an unorthodox adversarial attack that exploits the transients of a system's adaptive behavior, as opposed to its limited steady-state capacity. We show that a well-orchestrated attack could introduce significant inefficiencies that could potentially deprive a network element of much of its capacity, or significantly reduce its service quality, while evading detection by consuming an unsuspicious, small fraction of that element's hijacked capacity. This type of attack stands in sharp contrast to traditional brute-force, sustained high-rate DoS attacks, as well as recently proposed attacks that exploit specific protocol settings such as TCP timeouts. We exemplify what we term Reduction of Quality (RoQ) attacks by exposing the vulnerabilities of common adaptation mechanisms. We develop control-theoretic models and associated metrics to quantify these vulnerabilities. We present numerical and simulation results, which we validate with observations from real Internet experiments. Our findings motivate the need for the development of adaptation mechanisms that are resilient to these new forms of attacks.
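
A toy discrete-time simulation, not the paper's control-theoretic model, conveys the idea: brief, well-timed bursts repeatedly knock an adaptive source back into its slow recovery transient, so its sustained throughput collapses even though the attacker's average rate stays tiny. All rates and periods below are arbitrary illustrative values.

```python
# Toy Reduction-of-Quality illustration: an adaptive source recovers additively
# after each loss episode, and a one-step attack burst every BURST_PERIOD steps
# forces a loss. Illustrative only; parameters are arbitrary.

CAPACITY = 100.0       # link capacity (arbitrary units per step)
T = 12_000             # simulated steps
BURST_PERIOD = 120     # attacker fires a one-step burst this often
BURST_SIZE = CAPACITY  # enough, together with the flow, to force a loss

rate = 1.0
legit = attack_total = 0.0
for t in range(T):
    burst = BURST_SIZE if t % BURST_PERIOD == 0 else 0.0
    if rate + burst > CAPACITY:
        rate = 1.0                           # loss episode: timeout-like reset
    else:
        rate = min(rate + 1.0, CAPACITY)     # additive increase during recovery
    legit += rate
    attack_total += burst

print(f"legitimate utilization: {legit / (T * CAPACITY):.1%}")         # ~ 59%
print(f"attacker average rate : {attack_total / (T * CAPACITY):.1%}")  # < 1%
```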

Relevance: 10.00%

Abstract:

The exploding demand for services like the World Wide Web reflects the potential that is presented by globally distributed information systems. The number of WWW servers world-wide has doubled every 3 to 5 months since 1993, outstripping even the growth of the Internet. At each of these self-managed sites, the Common Gateway Interface (CGI) and Hypertext Transfer Protocol (HTTP) already constitute a rudimentary basis for contributing local resources to remote collaborations. However, the Web has serious deficiencies that make it unsuited for use as a true medium for metacomputing --- the process of bringing hardware, software, and expertise from many geographically dispersed sources to bear on large scale problems. These deficiencies are, paradoxically, the direct result of the very simple design principles that enabled its exponential growth. There are many symptoms of the problems exhibited by the Web: disk and network resources are consumed extravagantly; information search and discovery are difficult; protocols are aimed at data movement rather than task migration, and ignore the potential for distributing computation. However, all of these can be seen as aspects of a single problem: as a distributed system for metacomputing, the Web offers unpredictable performance and unreliable results. The goal of our project is to use the Web as a medium (within either the global Internet or an enterprise intranet) for metacomputing in a reliable way with performance guarantees. We attack this problem on four levels:

(1) Resource Management Services: Globally distributed computing allows novel approaches to the old problems of performance guarantees and reliability. Our first set of ideas involves setting up a family of real-time resource management models organized by the Web Computing Framework with a standard Resource Management Interface (RMI), a Resource Registry, a Task Registry, and resource management protocols that allow resource needs and availability information to be collected and disseminated, so that a family of algorithms with varying computational precision and accuracy of representations can be chosen to meet real-time and reliability constraints.

(2) Middleware Services: Complementary to techniques for allocating and scheduling available resources to serve application needs under real-time and reliability constraints, the second set of ideas aims at reducing communication latency, traffic congestion, server workload, etc. We develop customizable middleware services to exploit application characteristics in traffic analysis to drive new server/browser design strategies (e.g., exploit self-similarity of Web traffic), derive document access patterns via multiserver cooperation, and use them in speculative prefetching, document caching, and aggressive replication to reduce server load and bandwidth requirements.

(3) Communication Infrastructure: To achieve any guarantee of quality of service or performance, one must get at the network layer that can provide the basic guarantees of bandwidth, latency, and reliability. Therefore, the third area is a set of new techniques in network service and protocol designs.

(4) Object-Oriented Web Computing Framework: A useful resource management system must deal with job priority, fault-tolerance, quality of service, complex resources such as ATM channels, probabilistic models, etc., and models must be tailored to represent the best tradeoff for a particular setting.
This requires a family of models, organized within an object-oriented framework, because no one-size-fits-all approach is appropriate. This presents a software engineering challenge requiring integration of solutions at all levels: algorithms, models, protocols, and profiling and monitoring tools. The framework captures the abstract class interfaces of the collection of cooperating components, but allows the concretization of each component to be driven by the requirements of a specific approach and environment.
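
As a purely hypothetical sketch of the kind of abstract interface layer described in (1) and (4) (the class, method, and field names below are invented for illustration and are not taken from the project), one concrete resource-management policy is slotted behind a common base class:

```python
from __future__ import annotations
from abc import ABC, abstractmethod
from dataclasses import dataclass

# Hypothetical sketch of an object-oriented resource-management interface;
# names and fields are invented for illustration only.

@dataclass
class ResourceOffer:
    host: str
    cpu_share: float        # fraction of a CPU guaranteed
    bandwidth_kbps: float
    deadline_ms: float      # latest completion time the resource can promise

class ResourceManager(ABC):
    """Abstract base class a concrete real-time resource model would implement."""

    @abstractmethod
    def register_resource(self, offer: ResourceOffer) -> None:
        """Advertise a local resource to the registry."""

    @abstractmethod
    def match(self, cpu_needed: float, bw_needed_kbps: float,
              deadline_ms: float) -> ResourceOffer | None:
        """Return an offer meeting the task's real-time constraints, or None."""

class EarliestDeadlineManager(ResourceManager):
    """One concrete policy: hand out the tightest-deadline offer that still fits."""

    def __init__(self):
        self._offers: list[ResourceOffer] = []

    def register_resource(self, offer: ResourceOffer) -> None:
        self._offers.append(offer)

    def match(self, cpu_needed, bw_needed_kbps, deadline_ms):
        feasible = [o for o in self._offers
                    if o.cpu_share >= cpu_needed
                    and o.bandwidth_kbps >= bw_needed_kbps
                    and o.deadline_ms <= deadline_ms]
        return min(feasible, key=lambda o: o.deadline_ms) if feasible else None
```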

Relevance: 10.00%

Abstract:

This paper explores the problem of protecting a site on the Internet against hostile external Java applets while allowing trusted internal applets to run. With careful implementation, a site can be made resistant to current Java security weaknesses as well as those yet to be discovered. In addition, we describe a new attack on certain sophisticated firewalls that is most effectively realized as a Java applet.

Relevance: 10.00%

Abstract:

As new multi-party edge services are deployed on the Internet, application-layer protocols with complex communication models and event dependencies are increasingly being specified and adopted. To ensure that such protocols (and compositions thereof with existing protocols) do not result in undesirable behaviors (e.g., livelocks) there needs to be a methodology for the automated checking of the "safety" of these protocols. In this paper, we present ingredients of such a methodology. Specifically, we show how SPIN, a tool from the formal systems verification community, can be used to quickly identify problematic behaviors of application-layer protocols with non-trivial communication models—such as HTTP with the addition of the "100 Continue" mechanism. As a case study, we examine several versions of the specification for the Continue mechanism; our experiments mechanically uncovered multi-version interoperability problems, including some which motivated revisions of HTTP/1.1 and some which persist even with the current version of the protocol. One such problem resembles a classic degradation-of-service attack, but can arise between well-meaning peers. We also discuss how the methods we employ can be used to make explicit the requirements for hardening a protocol's implementation against potentially malicious peers, and for verifying an implementation's interoperability with the full range of allowable peer behaviors.
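
The flavor of such a check can be illustrated without SPIN. The sketch below is plain Python rather than Promela, and its two state machines are invented, heavily simplified stand-ins, not the paper's models: it exhaustively explores the joint state space of a strict "Expect: 100-continue" client and a server variant that never sends the interim response, and reports states where neither side can make progress.

```python
from collections import deque

# Toy exhaustive state-space exploration in the spirit of model checking.
# The client/server state machines are simplified inventions for illustration.

def client_moves(c, inbox):
    """Yield (new_state, consumed_msg, sent_msg) moves for a strict client."""
    if c == "send_headers":
        yield "await_100", None, "HEADERS"
    elif c == "await_100" and inbox and inbox[0] == "100":
        yield "send_body", "100", None
    elif c == "send_body":
        yield "await_final", None, "BODY"
    elif c == "await_final" and inbox and inbox[0] == "FINAL":
        yield "done", "FINAL", None

def server_moves(s, inbox):
    """A server variant that skips the 100 Continue and waits for the body."""
    if s == "await_headers" and inbox and inbox[0] == "HEADERS":
        yield "await_body", "HEADERS", None
    elif s == "await_body" and inbox and inbox[0] == "BODY":
        yield "reply", "BODY", None
    elif s == "reply":
        yield "done", None, "FINAL"

def explore():
    start = ("send_headers", "await_headers", (), ())   # client, server, c2s, s2c
    seen, frontier, stuck = {start}, deque([start]), []
    while frontier:
        c, s, c2s, s2c = frontier.popleft()
        succs = []
        for nc, eat, put in client_moves(c, s2c):
            succs.append((nc, s, c2s + ((put,) if put else ()),
                          s2c[1:] if eat else s2c))
        for ns, eat, put in server_moves(s, c2s):
            succs.append((c, ns, c2s[1:] if eat else c2s,
                          s2c + ((put,) if put else ())))
        if not succs and not (c == "done" and s == "done"):
            stuck.append((c, s))                        # no progress possible
        for nxt in succs:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return stuck

print("stuck states found:", explore())  # expect ('await_100', 'await_body')
```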

Relevance: 10.00%

Abstract:

We consider the problem of building robust fuzzy extractors, which allow two parties holding similar random variables W, W' to agree on a secret key R in the presence of an active adversary. Robust fuzzy extractors were defined by Dodis et al. in Crypto 2006 [6] to be noninteractive, i.e., only one message P, which can be modified by an unbounded adversary, can pass from one party to the other. This allows them to be used by a single party at different points in time (e.g., for key recovery or biometric authentication), but also presents an additional challenge: what if R is used, and thus possibly observed by the adversary, before the adversary has a chance to modify P? Fuzzy extractors secure against such a strong attack are called post-application robust. We construct a fuzzy extractor with post-application robustness that extracts a shared secret key of up to (2m−n)/2 bits (depending on error-tolerance and security parameters), where n is the bit-length and m is the entropy of W. The previously best known result, also due to Dodis et al. [6], extracted up to (2m−n)/3 bits (depending on the same parameters).
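
To make the improvement concrete, here is an illustrative calculation with hypothetical parameters, ignoring the additional losses from the error-tolerance and security parameters mentioned above:

```latex
% Illustrative only: hypothetical n and m plugged into the two stated bounds.
\[
  n = 1000,\quad m = 700
  \;\Longrightarrow\;
  \frac{2m-n}{3} = \frac{400}{3} \approx 133 \text{ bits (prior bound)},
  \qquad
  \frac{2m-n}{2} = 200 \text{ bits (this construction)}.
\]
```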

Relevance: 10.00%

Abstract:

We present a thorough characterization of the access patterns in blogspace, which comprises a rich interconnected web of blog postings and comments by an increasingly prominent user community that collectively defines what has become known as the blogosphere. Our characterization of over 35 million read, write, and management requests spanning a 28-day period is done at three different levels. The user view characterizes how individual users interact with blogosphere objects (blogs); the object view characterizes how individual blogs are accessed; the server view characterizes the aggregate access patterns of all users to all blogs. The more interactive nature of the blogosphere leads to interesting traffic and communication patterns, which are different from those observed for traditional web content. We identify and characterize novel features of the blogosphere workload, and we show the similarities and differences between typical web server workloads and blogosphere server workloads. Finally, based on our main characterization results, we build a new synthetic blogosphere workload generator called GBLOT, which aims to closely mimic a stream of requests originating from a population of blog users. Given the increasing share of blogspace traffic, realistic workload models and tools are important for capacity planning and traffic engineering purposes.
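
GBLOT's actual model and parameters are derived from the measured traces and are not reproduced here; the sketch below only illustrates the general shape of such a generator, with assumed popularity, think-time, and request-mix parameters.

```python
import random

# Minimal synthetic blog-workload sketch (illustrative only, not GBLOT):
# Zipf-like blog popularity, exponential think time between requests, and a
# fixed read/write/management mix. All parameters below are assumptions.

random.seed(1)

N_BLOGS = 10_000
ZIPF_ALPHA = 1.0
REQ_MIX = [("read", 0.90), ("write", 0.08), ("manage", 0.02)]  # assumed mix

# Zipf-like popularity weights over blog ranks 1..N_BLOGS
weights = [1.0 / (rank ** ZIPF_ALPHA) for rank in range(1, N_BLOGS + 1)]

def next_request(mean_think_s=2.0):
    """Return (inter-arrival time, request type, blog id) for one request."""
    gap = random.expovariate(1.0 / mean_think_s)
    r, acc = random.random(), 0.0
    for kind, p in REQ_MIX:
        acc += p
        if r <= acc:
            break
    blog = random.choices(range(N_BLOGS), weights=weights, k=1)[0]
    return gap, kind, blog

if __name__ == "__main__":
    for _ in range(5):
        print(next_request())
```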

Relevance: 10.00%

Abstract:

We propose an economic mechanism to reduce the incidence of malware that delivers spam. Earlier research proposed attention markets as a solution for unwanted messages, and showed they could provide more net benefit than alternatives such as filtering and taxes. Because it uses a currency system, Attention Bonds faces a challenge. Zombies, botnets, and various forms of malware might steal valuable currency instead of stealing unused CPU cycles. We resolve this problem by taking advantage of the fact that the spam-bot problem has been reduced to financial fraud. As such, the large body of existing work in that realm can be brought to bear. By drawing an analogy between sending and spending, we show how a market mechanism can detect and prevent spam malware. We prove that by using a currency (i) each instance of spam increases the probability of detecting infections, and (ii) the value of eradicating infections can justify insuring users against fraud. This approach attacks spam at the source, a virtue missing from filters that attack spam at the destination. Additionally, the exchange of currency provides signals of interest that can improve the targeting of ads. ISPs benefit from data management services and consumers benefit from the higher average value of messages they receive. We explore these and other secondary effects of attention markets, and find them to offer, on the whole, attractive economic benefits for all – including consumers, advertisers, and the ISPs.
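
A tiny illustrative calculation of claim (i), with hypothetical numbers rather than the paper's analysis: if each bonded message sent by a compromised machine is independently flagged as fraudulent with some small probability, the chance that the infection escapes detection shrinks geometrically with the volume of spam it emits.

```python
# Illustrative arithmetic only; the per-message flagging probability is assumed.
p = 0.02                      # assumed probability a single bonded spam is flagged
for k in (10, 100, 500):
    print(f"after {k:4d} spam messages: P(infection detected) = {1 - (1 - p) ** k:.3f}")
```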

Relevance: 10.00%

Abstract:

Recent research has exposed new breeds of attacks that are capable of denying service or inflicting significant damage to TCP flows, without sustaining the attack traffic. Such attacks are often referred to as "low-rate" attacks and they stand in sharp contrast to traditional Denial of Service (DoS) attacks that can completely shut off TCP flows by flooding an Internet link. In this paper, we study the impact of these new breeds of attacks and the extent to which defense mechanisms are capable of mitigating the attack's impact. By adopting a simple discrete-time model with a single TCP flow and a nonoblivious adversary, we expose new variants of these low-rate attacks that could potentially have high attack potency per attack burst. Our analysis focuses on worst-case scenarios, so our results should be regarded as upper bounds on the impact of low-rate attacks rather than an assessment of a specific attack scenario.
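
A back-of-envelope potency estimate, a toy AIMD model rather than the paper's analysis, shows why a nonoblivious adversary gets high damage per burst: it waits until the congestion window has fully recovered before firing, so every burst erases a full recovery cycle. The window and burst sizes below are arbitrary illustrative values.

```python
# Toy potency estimate for a nonoblivious low-rate burst against a single AIMD
# flow (illustrative only). The adversary fires exactly when the window has
# recovered to W, forcing a loss that halves it; potency is damage (packets of
# foregone service during recovery) per packet of attack traffic.

def potency_per_burst(w_max, burst_pkts):
    """w_max: window (pkts) at the moment of attack; burst_pkts: burst size."""
    recovery_rtts = w_max / 2.0                   # additive increase: 1 pkt/RTT
    # Deficit vs. an undisturbed flow sending w_max per RTT while the window
    # climbs linearly from w_max/2 back to w_max (a triangular area).
    damage = recovery_rtts * (w_max / 2.0) / 2.0
    return damage / burst_pkts

for w in (32, 64, 128):
    print(f"W = {w:3d}: potency = {potency_per_burst(w, burst_pkts=50):5.1f} "
          "packets denied per attack packet")
```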