352 results for National Science Foundation (U.S.). Directorate for Geosciences.
Abstract:
Stabilized micron-sized bubbles, known as contrast agents, are often injected into the body to enhance ultrasound imaging of blood flow. The ability to detect such bubbles in blood depends on the ratio of the acoustic power backscattered from the microbubbles (‘signal’) to the power backscattered from the red blood cells (‘noise’). Erythrocytes are acoustically small (Rayleigh regime), weak scatterers, and therefore the backscatter coefficient (BSC) of blood increases as the fourth power of frequency throughout the diagnostic frequency range. Microbubbles, on the other hand, are either resonant or super-resonant in the range 5-30 MHz; above resonance, their total scattering cross-section remains constant with increasing frequency. In the present thesis, a theoretical model of the BSC of a suspension of red blood cells is presented and compared to the BSC of Optison® contrast agent microbubbles. It is predicted that, as the frequency increases, the BSC of red blood cell suspensions eventually exceeds the BSC of the strongly scattering microbubbles, leading to a dramatic reduction in signal-to-noise ratio (SNR). This decrease in SNR with increasing frequency was also confirmed experimentally using an active cavitation detector for different concentrations of Optison® microbubbles in erythrocyte suspensions of different hematocrits. The magnitude of the observed decrease in SNR correlated well with theoretical predictions in most cases, except for very dense suspensions of red blood cells, where it is hypothesized that the close proximity of erythrocytes inhibits the acoustic response of the microbubbles.
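The frequency scaling at the heart of this argument can be sketched numerically. The snippet below contrasts an f^4 Rayleigh backscatter law for blood with a frequency-flat bubble cross-section above resonance; the coefficient values are invented placeholders, not the fitted values from the thesis.

```python
import numpy as np

# Illustrative sketch of the SNR argument: Rayleigh backscatter from blood
# grows as f^4, while the total scattering cross-section of a super-resonant
# microbubble is roughly frequency-independent. All constants are placeholders.

f = np.linspace(5e6, 30e6, 500)             # frequency sweep, 5-30 MHz

bsc_blood_ref = 1e-5                        # assumed BSC of blood at 5 MHz
bsc_blood = bsc_blood_ref * (f / 5e6) ** 4  # Rayleigh regime: BSC ~ f^4
bsc_bubble = 1e-2                           # assumed flat BSC of bubbles

snr = bsc_bubble / bsc_blood                # signal (bubbles) over noise (blood)
print(f"SNR falls from {snr[0]:.0f} at 5 MHz to {snr[-1]:.2f} at 30 MHz")
if np.any(snr <= 1.0):
    crossover = f[np.argmax(snr <= 1.0)]
    print(f"blood backscatter overtakes bubbles near {crossover/1e6:.0f} MHz")
```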
Abstract:
Unstable arterial plaque is likely the key component of atherosclerosis, a disease which is responsible for two-thirds of heart attacks and strokes, leading to approximately 1 million deaths in the United States. Ultrasound imaging is able to detect plaque but as of yet is not able to distinguish unstable from stable plaque. In this work a scanning acoustic microscope (SAM) was implemented and validated as a tool to measure the acoustic properties of a sample. The goal for the SAM is to provide quantitative measurements of the acoustic properties of different plaque types, in order to understand the physical basis by which plaque may be identified acoustically. The SAM consists of a spherically focused transducer which operates in pulse-echo mode and is scanned in a 2D raster pattern over a sample. A plane wave analysis is presented which allows the impedance, attenuation and phase velocity of a sample to be determined from measurements of the echoes from the front and back of the sample. After the measurements, the attenuation and phase velocity were analysed to ensure that they were consistent with causality. The backscatter coefficient of the samples was obtained using the technique outlined by Chen et al. [8]. The transducer used here was able to determine acoustic properties from 10 to 40 MHz. The results for the impedance, attenuation and phase velocity were validated for high- and low-density polyethylene against published results. The plane wave approximation was validated by measuring the properties throughout the focal region and throughout a range of incidence angles from the transducer. The SAM was used to characterize a set of recipes for tissue-mimicking phantoms which demonstrate independent control over the impedance, attenuation, phase velocity and backscatter coefficient. An initial feasibility study on a human artery was performed.
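As a rough illustration of the plane-wave analysis described above, the sketch below recovers impedance from the front-echo reflection coefficient, phase velocity from the round-trip delay, and attenuation from the transmission-corrected back-echo amplitude. The thickness, amplitudes and arrival times are assumed example values; this is not the thesis's processing code.

```python
import numpy as np

# Minimal plane-wave front/back-echo sketch with assumed measurement values.
Z_w = 1.48e6                         # acoustic impedance of water (Rayl)
d = 2.0e-3                           # known sample thickness (m)
A_front, A_back = 0.30, 0.10         # echo amplitudes relative to incident
t_front, t_back = 20.0e-6, 22.0e-6   # echo arrival times (s)

# Impedance from the front-echo reflection coefficient R = (Z - Z_w)/(Z + Z_w)
R = A_front
Z = Z_w * (1 + R) / (1 - R)

# Phase velocity from the round-trip delay across the sample
c = 2 * d / (t_back - t_front)

# Attenuation from the back echo: its magnitude is R*(1 - R**2)*exp(-2*alpha*d)
# after reflecting off the back face and crossing the front face twice
alpha = -np.log(A_back / (A_front * (1 - R**2))) / (2 * d)

print(f"Z = {Z/1e6:.2f} MRayl, c = {c:.0f} m/s, alpha = {alpha:.0f} Np/m")
```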
Abstract:
We present results of calculations [1] that employ a new mixed quantum-classical iterative linearized density matrix propagation approach (ILDM, the so-called iterative LAND-map) [2] to explore the survival of coherence in different photosynthetic models. Our model studies confirm the long-lived quantum coherence, while conventional theoretical tools (such as the Redfield equation) fail to describe these phenomena [3,4]. The ILDM method is a numerically exact propagation scheme and can serve as a benchmark calculation tool [2]. Results obtained from ILDM and from other recent methods have been compared and agree with each other [4,5]. The long-lived coherence plateau is attributed to the shift of the harmonic potential due to the system-bath interaction, and the harvesting efficiency reflects a balance between coherence and dissipation [1]. We use this approach to investigate excitation energy transfer dynamics in various light-harvesting complexes, including the Fenna-Matthews-Olson light-harvesting complex [1] and Cryptophyte Phycocyanin 645 [6]. [1] P. Huo and D. F. Coker, J. Chem. Phys. 133, 184108 (2010). [2] E. R. Dunkel, S. Bonella, and D. F. Coker, J. Chem. Phys. 129, 114106 (2008). [3] A. Ishizaki and G. R. Fleming, J. Chem. Phys. 130, 234111 (2009). [4] A. Ishizaki and G. R. Fleming, Proc. Natl. Acad. Sci. 106, 17255 (2009). [5] G. Tao and W. H. Miller, J. Phys. Chem. Lett. 1, 891 (2010). [6] P. Huo and D. F. Coker, in preparation.
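The dynamical quantity at issue is the off-diagonal element of the reduced density matrix. As a toy stand-in (emphatically not the ILDM scheme itself), the following sketch propagates a two-site excitonic density matrix with an exact unitary step plus phenomenological dephasing, using invented parameters, to show how site population and coherence evolve.

```python
import numpy as np
from scipy.linalg import expm

# Toy two-site exciton model: unitary propagation of the reduced density
# matrix plus phenomenological dephasing. This only illustrates the observable
# (population vs. coherence decay); it is NOT the ILDM scheme of refs. [1,2],
# and all parameters are invented.

eps, J = 100.0, 50.0            # site energy bias and coupling (cm^-1, assumed)
gamma = 0.005                   # dephasing rate (1/fs, assumed)
cm_to_radfs = 2 * np.pi * 3e-5  # converts cm^-1 to rad/fs (2*pi*c)

H = cm_to_radfs * np.array([[eps / 2, J], [J, -eps / 2]])
rho = np.array([[1, 0], [0, 0]], dtype=complex)   # excitation starts on site 1

dt = 1.0                        # time step (fs)
U = expm(-1j * H * dt)          # exact one-step propagator
for n in range(1001):
    if n % 200 == 0:
        print(f"t={n*dt:6.0f} fs  P1={rho[0,0].real:.3f}  "
              f"|coherence|={abs(rho[0,1]):.3f}")
    rho = U @ rho @ U.conj().T          # coherent evolution
    decay = np.exp(-gamma * dt)
    rho[0, 1] *= decay                  # damp the site coherence
    rho[1, 0] *= decay
```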
Abstract:
Handshape is a key articulatory parameter in sign language, and thus handshape recognition from signing video is essential for sign recognition and retrieval. Handshape transitions within monomorphemic lexical signs (the largest class of signs in signed languages) are governed by phonological rules. For example, such transitions normally involve either closing or opening of the hand (i.e., they exclusively use either folding or unfolding of the palm and one or more fingers). Furthermore, akin to allophonic variations in spoken languages, both inter- and intra-signer variations in the production of specific handshapes are observed. We propose a Bayesian network formulation that exploits handshape co-occurrence constraints and also utilizes information about allophonic variations to aid in handshape recognition. We propose a fast non-rigid image alignment method to gain improved robustness to handshape appearance variations during computation of observation likelihoods in the Bayesian network. We evaluate our handshape recognition approach on a large dataset of monomorphemic lexical signs. We demonstrate that leveraging linguistic constraints on handshapes results in improved handshape recognition accuracy. As part of the overall project, we are collecting and preparing for dissemination a large corpus (three thousand signs from three native signers) of American Sign Language (ASL) video. The videos have been annotated using SignStream® [Neidle et al.] with labels for linguistic information such as glosses, morphological properties and variations, and the start/end handshapes associated with each ASL sign.
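To make the co-occurrence idea concrete, here is a minimal sketch (with an invented handshape inventory, transition table, and likelihoods) of fusing a phonological prior over legal start/end handshape pairs with observation likelihoods to obtain a MAP pair; the paper's Bayesian network is richer than this.

```python
import numpy as np

# Fuse a co-occurrence prior over (start, end) handshape pairs with
# observation likelihoods. All values below are made up for illustration.

handshapes = ["fist", "flat", "5", "1"]

# Phonological constraint: which (start, end) transitions are legal
# (e.g., pure opening or closing of the hand). 1 = allowed, 0 = ruled out.
legal = np.array([[1, 1, 1, 1],
                  [1, 1, 0, 0],
                  [1, 0, 1, 0],
                  [1, 0, 0, 1]], dtype=float)
prior = legal / legal.sum()                     # joint prior over (start, end)

# Observation likelihoods from the image-alignment stage (assumed values)
p_obs_start = np.array([0.5, 0.2, 0.2, 0.1])    # P(obs_s | start handshape)
p_obs_end = np.array([0.1, 0.1, 0.7, 0.1])      # P(obs_e | end handshape)

posterior = prior * np.outer(p_obs_start, p_obs_end)
posterior /= posterior.sum()

s, e = np.unravel_index(posterior.argmax(), posterior.shape)
print(f"MAP pair: start={handshapes[s]}, end={handshapes[e]}")
```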
Abstract:
Poster is based on the following paper: C. Kwan and M. Betke. Camera Canvas: Image editing software for people with disabilities. In Proceedings of the 14th International Conference on Human Computer Interaction (HCI International 2011), Orlando, Florida, July 2011.
Abstract:
Numerous problems exist that can be modeled as traffic through a network in which constraints regulate flow. Vehicular road travel, computer networks, and cloud-based resource distribution, among others, all have natural representations in this manner. As these networks grow in size and/or complexity, analysis and certification of their safety invariants become increasingly costly. The NetSketch formalism introduces a lightweight verification framework that allows for greater scalability than traditional analysis methods. The NetSketch tool was developed to provide the power of this formalism in an easy-to-use and intuitive user interface.
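In the flavor of such lightweight, compositional checking (this sketch illustrates the general idea only; it is not the NetSketch formalism or its type system), components can advertise intervals of traffic they safely carry, and composition reduces to cheap interval arithmetic:

```python
# Each component advertises the interval of traffic it can safely carry;
# composition checks that connected interfaces have compatible intervals.

def series(a, b):
    """Compose two components in series; throughput is constrained by both."""
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    if lo > hi:
        raise ValueError(f"incompatible interfaces: {a} vs {b}")
    return (lo, hi)

def parallel(a, b):
    """Compose side by side; capacities add."""
    return (a[0] + b[0], a[1] + b[1])

# Two links feeding a shared uplink: the composed interval certifies how
# much combined demand the uplink can absorb without deeper analysis.
link1, link2, uplink = (0, 40), (0, 70), (0, 100)
print(series(parallel(link1, link2), uplink))   # -> (0, 100)
```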
Abstract:
Neoplastic tissue is typically highly vascularized, contains abnormal concentrations of extracellular proteins (e.g. collagen, proteoglycans) and has a high interstitial fluid pressure compared to most normal tissues. These changes result in an overall stiffening typical of most solid tumors. Elasticity Imaging (EI) is a technique which uses imaging systems to measure relative tissue deformation and thus noninvasively infer its mechanical stiffness. Stiffness is recovered from measured deformation by using an appropriate mathematical model and solving an inverse problem. The integration of EI with existing imaging modalities can improve their diagnostic and research capabilities. The aim of this work is to develop and evaluate techniques to image and quantify the mechanical properties of soft tissues in three dimensions (3D). To that end, this thesis presents and validates a method by which three-dimensional ultrasound images can be used to image and quantify the shear modulus distribution of tissue-mimicking phantoms. This work is presented to motivate and justify the use of this elasticity imaging technique in a clinical breast cancer screening study. The imaging methodologies discussed are intended to improve the specificity of mammography practices in general. During the development of these techniques, several issues concerning the accuracy and uniqueness of the result were elucidated. Two new algorithms for 3D EI are designed and characterized in this thesis. The first provides three-dimensional motion estimates from ultrasound images of the deforming material. Its novel features include finite element interpolation of the displacement field, inclusion of prior information and the ability to enforce physical constraints. The roles of regularization, mesh resolution and an incompressibility constraint in the accuracy of the measured deformation are quantified. The estimated signal-to-noise ratios of the measured displacement fields are approximately 1800, 21 and 41 for the axial, lateral and elevational components, respectively. The second algorithm recovers the shear elastic modulus distribution of the deforming material by efficiently solving the three-dimensional inverse problem as an optimization problem. This method utilizes finite element interpolations, the adjoint method to evaluate the gradient and a quasi-Newton BFGS method for optimization. Its novel features include the use of the adjoint method and TVD regularization with piecewise-constant interpolation. A source of non-uniqueness in this inverse problem is identified theoretically, demonstrated computationally, explained physically and overcome practically. Both algorithms were tested on ultrasound data of independently characterized tissue-mimicking phantoms. The recovered elastic modulus was in all cases within 35% of the reference elastic contrast. Finally, the preliminary application of these techniques to tomosynthesis images showed the feasibility of imaging an elastic inclusion.
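The inverse-problem formulation can be illustrated with a much smaller analogue. The sketch below recovers the stiffnesses of a 1D spring chain from noisy displacements by quasi-Newton minimization of a data-mismatch objective (here with finite-difference gradients rather than the adjoint method, and without the TVD regularization used in the thesis):

```python
import numpy as np
from scipy.optimize import minimize

# Toy 1D analogue of modulus reconstruction: recover spring stiffnesses
# from noisy displacement measurements. Illustrative only; the thesis
# solves a 3D FEM problem with adjoint gradients.

def forward(k, force=1.0):
    """Displacements of serially connected springs under a unit end load."""
    return np.cumsum(force / k)          # u_i = sum_{j<=i} F / k_j

k_true = np.array([1.0, 3.0, 2.0, 5.0])
u_meas = forward(k_true) + 1e-3 * np.random.default_rng(0).standard_normal(4)

def objective(log_k):                    # log parameterization keeps k > 0
    return 0.5 * np.sum((forward(np.exp(log_k)) - u_meas) ** 2)

res = minimize(objective, x0=np.zeros(4), method="BFGS")
print("recovered stiffness:", np.round(np.exp(res.x), 2))
```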
Abstract:
Swiss National Science Foundation; Austrian Federal Ministry of Science and Research; Deutsche Forschungsgemeinschaft (SFB 314); Christ Church, Oxford; Oxford University Computing Laboratory
Abstract:
We investigate the problem of learning disjunctions of counting functions, which are general cases of parity and modulo functions, with equivalence and membership queries. We prove that, for any prime number p, the class of disjunctions of integer-weighted counting functions with modulus p over the domain Z_q^n (or Z^n) for any given integer q ≥ 2 is polynomial time learnable using at most n + 1 equivalence queries, where the hypotheses issued by the learner are disjunctions of at most n counting functions with weights from Z_p. The result is obtained through learning linear systems over an arbitrary field. In general a counting function may have a composite modulus. We prove that, for any given integer q ≥ 2, over the domain Z_2^n, the class of read-once disjunctions of Boolean-weighted counting functions with modulus q is polynomial time learnable with only one equivalence query, and the class of disjunctions of log log n Boolean-weighted counting functions with modulus q is polynomial time learnable. Finally, we present an algorithm for learning graph-based counting functions.
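The linear-algebraic core of such learners is maintaining a hypothesis consistent with the examples seen so far, i.e., solving a linear system over Z_p. A self-contained sketch of that subroutine (the query protocol itself is omitted):

```python
# Gaussian elimination over the field Z_p for prime p: returns one solution
# of A x = b (mod p), or None if the system is inconsistent.

def solve_mod_p(A, b, p):
    A = [row[:] + [rhs] for row, rhs in zip(A, b)]   # augmented matrix
    n, m = len(A), len(A[0]) - 1
    pivots, r = [], 0
    for c in range(m):
        pivot = next((i for i in range(r, n) if A[i][c] % p), None)
        if pivot is None:
            continue                                  # no pivot in this column
        A[r], A[pivot] = A[pivot], A[r]
        inv = pow(A[r][c], p - 2, p)                  # Fermat inverse mod p
        A[r] = [(v * inv) % p for v in A[r]]
        for i in range(n):
            if i != r and A[i][c] % p:
                f = A[i][c]
                A[i] = [(v - f * w) % p for v, w in zip(A[i], A[r])]
        pivots.append(c)
        r += 1
    if any(row[-1] % p for row in A[r:]):             # 0 = nonzero: inconsistent
        return None
    x = [0] * m
    for i, c in enumerate(pivots):
        x[c] = A[i][-1]
    return x

print(solve_mod_p([[1, 2], [3, 4]], [1, 0], 5))       # -> [3, 4] over Z_5
```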
Abstract:
For communication-intensive parallel applications, the maximum degree of concurrency achievable is limited by the communication throughput made available by the network. In previous work [HPS94], we showed experimentally that the performance of certain parallel applications running on a workstation network can be improved significantly if a congestion control protocol is used to enhance network performance. In this paper, we characterize and analyze the communication requirements of a large class of supercomputing applications that fall under the category of fixed-point problems, amenable to solution by parallel iterative methods. This results in a set of interface and architectural features sufficient for the efficient implementation of the applications over a large-scale distributed system. In particular, we propose a direct link between the application and network layers, supporting congestion control actions at both ends. This in turn enhances the system's responsiveness to network congestion, improving performance. Measurements are given showing the efficacy of our scheme in supporting large-scale parallel computations.
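For concreteness, the fixed-point structure referred to above is the iteration x <- F(x) run until convergence; a Jacobi sweep for a diagonally dominant linear system is a minimal example. The sequential sketch below (our illustration, not from the paper) marks in comments where the parallel version communicates:

```python
import numpy as np

# Jacobi iteration as a canonical parallel fixed-point computation.
rng = np.random.default_rng(1)
A = rng.random((6, 6)) + 6 * np.eye(6)   # diagonally dominant => convergent
b = rng.random(6)

x = np.zeros(6)
D = np.diag(A)
for it in range(100):
    # In the parallel version each worker updates a block of x and then
    # exchanges boundary values with its neighbors over the network; that
    # exchange is the communication the proposed interface manages.
    x_new = (b - (A @ x - D * x)) / D
    if np.max(np.abs(x_new - x)) < 1e-10:   # fixed point reached
        break
    x = x_new
print(f"converged in {it} iterations, residual {np.max(np.abs(A @ x - b)):.2e}")
```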
Abstract:
The ML programming language restricts type polymorphism to occur only in the "let-in" construct and requires every occurrence of a formal parameter of a function (a lambda abstraction) to have the same type. Milner in 1978 refers to this restriction (which was adopted to help ML achieve automatic type inference) as a serious limitation. We show that this restriction can be relaxed enough to allow universal polymorphic abstraction without losing automatic type inference. This extension is equivalent to the rank-2 fragment of System F. We precisely characterize the additional program phrases (lambda terms) that can be typed with this extension, and we describe typing anomalies both before and after the extension. We discuss how macros may be used to gain some of the power of rank-3 types without losing automatic type inference. We also discuss user-interface problems concerning how to inform the programmer of the possible types a program phrase may have.
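A standard illustration of the restriction and of what the rank-2 extension buys (our example, not drawn from the paper):

```latex
% Typable in ML: the let-bound identity may be used polymorphically.
%   let id = \lambda x.\,x  in  (id\ 3,\ id\ \mathsf{true})
% Untypable in ML: a lambda-bound parameter must have a single type, so
%   (\lambda id.\ (id\ 3,\ id\ \mathsf{true}))\ (\lambda x.\,x)
% is rejected. The rank-2 extension accepts it by assigning the outer
% abstraction a type with a quantifier to the left of an arrow:
\[
  \lambda id.\ (id\ 3,\ id\ \mathsf{true})
  \;:\; (\forall\alpha.\ \alpha\to\alpha) \to \mathsf{int}\times\mathsf{bool}
\]
```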
Abstract:
We prove that first-order logic is strictly weaker than fixed point logic over every infinite class of finite ordered structures with unary relations: over these classes there is always an inductive unary relation which cannot be defined by a first-order formula, even when every inductive sentence (i.e., closed formula) can be expressed in first-order logic over this particular class. Our proof first establishes a property, valid for every unary relation definable in first-order logic over these classes, which is peculiar to classes of ordered structures with unary relations. In a second step we show that this property itself can be expressed in fixed point logic and can be used to construct a non-elementary unary relation.
Abstract:
We give an explicit and easy-to-verify characterization for subsets in finite total orders (infinitely many of them in general) to be uniformly definable by a first-order formula. From this characterization we derive immediately that Beth's definability theorem does not hold in any class of finite total orders, as well as that McColm's first conjecture is true for all classes of finite total orders. Another consequence is a natural 0-1 law for definable subsets on finite total orders expressed as a statement about the possible densities of first-order definable subsets.
Abstract:
This paper explores reasons for the high degree of variability in the sizes of ASes that has recently been observed, and the processes by which this variable distribution develops. The AS size distribution is important for a number of reasons. First, when modeling network topologies, an AS size distribution assists in labeling routers with an associated AS. Second, AS size has been found to be positively correlated with the degree of the AS (number of peering links), so understanding the distribution of AS sizes has implications for AS connectivity properties. Our model accounts for AS births, growth, and mergers. We analyze two models: one incorporates only the growth of hosts and ASes, and a second extends that model to include mergers of ASes. We show analytically that, given reasonable assumptions about the nature of mergers, the resulting size distribution exhibits a power law tail with the exponent independent of the details of the merging process. We estimate parameters of the models from measurements obtained from Internet registries and from BGP tables. We then compare the models' solutions to empirical AS size distributions taken from the Mercator and Skitter datasets, and find that the simple growth-based model yields general agreement with empirical data. Our analysis of the model in which mergers occur in a manner independent of the size of the merging ASes suggests that more detailed analysis of merger processes is needed.
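A minimal version of the growth-only model can be simulated directly. In the sketch below a new host either starts a new AS or joins an existing one chosen in proportion to its size; the parameter values are arbitrary, and the paper's model and its estimation from registry/BGP data are more detailed. The resulting size distribution is heavy-tailed:

```python
import random

# Pure growth model: at each step a new host starts a new AS with
# probability p, or joins an existing AS chosen proportionally to size.
random.seed(42)
p, steps = 0.05, 200_000
owner = []     # owner[i] = AS index of host i; uniform sampling from this
sizes = []     # list is equivalent to a size-proportional choice of AS

for _ in range(steps):
    if not sizes or random.random() < p:
        sizes.append(1)                   # birth of a new AS
        owner.append(len(sizes) - 1)
    else:
        a = random.choice(owner)          # size-biased choice of an AS
        sizes[a] += 1
        owner.append(a)

for s in (1, 10, 100, 1000):
    frac = sum(1 for x in sizes if x >= s) / len(sizes)
    print(f"P(size >= {s:4d}) = {frac:.4f}")  # roughly straight on log-log axes
```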
Abstract:
A well-known paradigm for load balancing in distributed systems is the "power of two choices," whereby an item is stored at the less loaded of two (or more) random alternative servers. We investigate the power of two choices in natural settings for distributed computing where items and servers reside in a geometric space and each item is associated with the server that is its nearest neighbor. This is in fact the backdrop for distributed hash tables such as Chord, where the geometric space is determined by clockwise distance on a one-dimensional ring. Theoretically, we consider the following load balancing problem. Suppose that servers are initially hashed uniformly at random to points in the space. Sequentially, each item then considers d candidate insertion points also chosen uniformly at random from the space, and selects the insertion point whose associated server has the least load. For the one-dimensional ring, and for Euclidean distance on the two-dimensional torus, we demonstrate that when n data items are hashed to n servers, the maximum load at any server is log log n / log d + O(1) with high probability. While our results match the well-known bounds in the standard setting in which each server is selected equiprobably, our applications do not have this feature, since the sizes of the nearest-neighbor regions around servers are non-uniform. Therefore, the novelty in our methods lies in developing appropriate tail bounds on the distribution of nearest-neighbor region sizes and in adapting previous arguments to this more general setting. In addition, we provide simulation results demonstrating the load balance that results as the system size scales into the millions.
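A small simulation of the one-dimensional ring setting (our illustration; the server count, d, and the clockwise successor ownership rule are assumptions matching the Chord-style setup described above):

```python
import bisect
import random

# Geometric two-choices on the unit ring: each item examines d random
# points and stores itself at the least-loaded server owning one of them.
random.seed(7)
n, d = 100_000, 2
servers = sorted(random.random() for _ in range(n))
load = [0] * n

def owner(point):
    """Server owning the point: first server clockwise (successor rule)."""
    return bisect.bisect_left(servers, point) % n

for _ in range(n):                        # insert n items
    candidates = [owner(random.random()) for _ in range(d)]
    best = min(candidates, key=lambda s: load[s])
    load[best] += 1

print("max load:", max(load))             # ~ log log n / log d + O(1)
```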