6 results for Noninvasive Technique
in CaltechTHESIS
Abstract:
In Part I a class of linear boundary value problems is considered which is a simple model of boundary layer theory. The effect of zeros and singularities of the coefficients of the equations at the point where the boundary layer occurs is considered. The usual boundary layer techniques are still applicable in some cases and are used to derive uniform asymptotic expansions. In other cases it is shown that the inner and outer expansions do not overlap, due to the presence of a turning point outside the boundary layer. The region near the turning point is described by a two-variable expansion. In these cases a related initial value problem is solved and then used to show formally that, for the boundary value problem, either a solution exists except for a discrete set of eigenvalues (whose asymptotic behaviour is found), or the solution is non-unique. A proof is given of the validity of the two-variable expansion; in a special case this proof also demonstrates the validity of the inner and outer expansions.
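For orientation, a minimal model of the kind of problem treated in Part I might take the following form (an illustrative example, not an equation from the thesis): a small parameter multiplies the highest derivative, and the coefficient of the first-derivative term vanishes at the point where the boundary layer sits,

\[
\varepsilon\, y'' + x^{n} a(x)\, y' + b(x)\, y = 0, \qquad 0 \le x \le 1, \quad 0 < \varepsilon \ll 1, \quad a(0) \neq 0,
\]

with boundary conditions prescribed at x = 0 and x = 1. A sign change of a(x) at an interior point would then play the role of the turning point that prevents the inner and outer expansions from overlapping.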
Nonlinear dispersive wave equations which are governed by variational principles are considered in Part II. It is shown that the averaged Lagrangian variational principle is in fact exact. This result is used to construct perturbation schemes that enable higher order terms in the equations for the slowly varying quantities to be calculated. A simple scheme applicable to linear or near-linear equations is derived first. The specific form of the first order correction terms is derived for several examples. The stability of constant solutions to these equations is considered, and it is shown that the correction terms lead to the instability cut-off found by Benjamin. A general stability criterion is given which explicitly demonstrates the conditions under which this cut-off occurs. The corrected set of equations is itself a set of nonlinear dispersive equations, and its stationary solutions are investigated. A more sophisticated scheme is developed for fully nonlinear equations by using an extension of the Hamiltonian formalism recently introduced by Whitham. Finally, the averaged Lagrangian technique is extended to treat slowly varying multiply-periodic solutions. The adiabatic invariants for a separable mechanical system are derived by this method.
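As background for the averaged-Lagrangian approach (standard Whitham theory at lowest order, stated here only for orientation; the thesis is concerned with making this exact and computing higher-order corrections), a slowly varying wavetrain with phase θ, frequency ω = -θ_t and wavenumber k = θ_x is governed by the averaged variational principle

\[
\delta \iint \bar{L}(\omega,k,a)\,dx\,dt = 0, \qquad
\bar{L}(\omega,k,a) = \frac{1}{2\pi}\oint L\,d\theta ,
\]

whose variational equations

\[
\frac{\partial \bar{L}}{\partial a} = 0, \qquad
\frac{\partial}{\partial t}\!\left(\frac{\partial \bar{L}}{\partial \omega}\right)
- \frac{\partial}{\partial x}\!\left(\frac{\partial \bar{L}}{\partial k}\right) = 0, \qquad
\frac{\partial k}{\partial t} + \frac{\partial \omega}{\partial x} = 0
\]

are, respectively, the dispersion relation, conservation of wave action, and consistency of the phase.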
Abstract:
An analytic technique is developed that couples to finite-difference calculations to extend their results to arbitrary distances. The finite-difference method and the analytic result, a boundary integral called the two-dimensional Kirchhoff integral, are applied to simple models and to three seismological problems dealing with data. The simple models include a thorough investigation of the seismological effects of a deep continental basin. The first problem concerns explosions at Yucca Flat, in the Nevada Test Site. By modeling both near-field strong-motion records and teleseismic P-waves simultaneously, it is shown that scattered surface waves are responsible for the teleseismic complexity. The second problem deals with explosions at Amchitka Island, Alaska. The near-field seismograms are investigated using a variety of complex structures and sources. The third problem involves regional seismograms of Imperial Valley, California earthquakes recorded at Pasadena, California. The data are shown to contain evidence of deterministic structure, but the lack of more direct measurements of the structure and possible three-dimensional effects make two-dimensional modeling of these data difficult.
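For reference, the coupling has the familiar representation-theorem form (written here schematically for a scalar field in the frequency domain; this is the generic two-dimensional Kirchhoff integral rather than the thesis's exact expression). The finite-difference wavefield u and its normal derivative are recorded on a contour S enclosing the complex structure and then propagated analytically to a distant receiver x:

\[
u(\mathbf{x}) = \int_{S}\left[\,G(\mathbf{x},\boldsymbol{\xi})\,\frac{\partial u}{\partial n}(\boldsymbol{\xi})
- u(\boldsymbol{\xi})\,\frac{\partial G}{\partial n}(\mathbf{x},\boldsymbol{\xi})\right] d\ell(\boldsymbol{\xi}),
\]

where G is the two-dimensional Green's function of the exterior medium, e.g. \( G = \tfrac{i}{4}H_{0}^{(1)}(\omega r / c) \) for a homogeneous acoustic medium.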
Abstract:
The nuclear resonant reaction 19F(p,αγ)16O has been used to perform depth-sensitive analyses of fluorine in lunar samples and carbonaceous chondrites. The resonance at 0.83 MeV (center-of-mass) in this reaction is utilized to study fluorine surface films, with particular interest paid to the outer micron of Apollo 15 green glass, Apollo 17 orange glass, and lunar vesicular basalts. These results are distinguished from terrestrial contamination and are discussed in terms of a volcanic origin for the samples of interest. Measurements of fluorine in carbonaceous chondrites are used to better define the solar-system fluorine abundance. A technique for the measurement of carbon on solid surfaces, with applications to the direct quantitative analysis of implanted solar-wind carbon in lunar samples, is described.
Abstract:
The resonant nuclear reaction 19F(p,αγ)16O has been used to perform depth-sensitive analyses for both fluorine and hydrogen in solid samples. The resonance at 0.83 MeV (center-of-mass) in this reaction has been applied to the measurement of the distribution of trapped solar protons in lunar samples to depths of ~0.5 µm. These results are interpreted in terms of a redistribution of the implanted H which has been influenced by heavy radiation damage in the surface region. Fluorine determinations have been performed in a 1-µm surface layer on lunar and meteoritic samples using the same 19F(p,αγ)16O resonance. The measurement of H depth distributions has also been used to study the hydration of terrestrial obsidian, a phenomenon of considerable archaeological interest as a means of dating obsidian artifacts. Additional applications of this type of technique are also discussed.
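The depth sensitivity of such measurements comes from the energy loss of the beam in the target: raising the beam energy above the resonance energy moves the depth at which the resonance condition is met deeper into the sample, so scanning the beam energy maps out a concentration-versus-depth profile. A minimal sketch of that conversion is given below; the numbers are illustrative placeholders, not values from the thesis.

# Hedged sketch of resonance depth profiling: the depth at which the beam has
# slowed down to the resonance energy. All numbers are illustrative only.
def depth_probed_um(e_beam_kev, e_res_kev, stopping_kev_per_um):
    """Depth (micrometres) at which the resonance condition is met, assuming a
    constant stopping power over the shallow depths involved."""
    if e_beam_kev < e_res_kev:
        return None  # beam energy below the resonance: no depth is probed
    return (e_beam_kev - e_res_kev) / stopping_kev_per_um

# Scanning the beam energy probes successively greater depths:
for e_beam in (880.0, 920.0, 960.0):                      # placeholder energies [keV]
    print(e_beam, depth_probed_um(e_beam, 872.0, 100.0))  # placeholder E_res and dE/dx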
Abstract:
A technique is developed for the design of lenses for transitioning TEM waves between conical and/or cylindrical transmission lines, ideally with no reflection or distortion of the waves. These lenses utilize isotropic but inhomogeneous media and are based on a solution of Maxwell's equations rather than on geometrical optics alone. The technique expresses the constitutive parameters, ε and μ, together with Maxwell's equations, in a general orthogonal curvilinear coordinate system in tensor form, giving what we term formal quantities. After the problem is solved for certain types of formal constitutive parameters, these are transformed back to give ε and μ as functions of position. Several examples of such lenses are considered in detail.
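For orientation, the identity underlying such formal quantities can be stated as follows (generic notation, which need not match the thesis's conventions, so treat it as an assumption). In an orthogonal curvilinear system (u^1, u^2, u^3) with scale factors h_1, h_2, h_3, Maxwell's equations retain their Cartesian form provided the constitutive parameters are replaced by formal ones,

\[
\varepsilon'_{i} = \varepsilon\,\frac{h_1 h_2 h_3}{h_i^{2}}, \qquad
\mu'_{i} = \mu\,\frac{h_1 h_2 h_3}{h_i^{2}}, \qquad i = 1,2,3 ,
\]

so that choosing simple formal parameters in the transformed problem and mapping back through the coordinate system yields ε and μ as functions of position in physical space.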
Abstract:
Optical coherence tomography (OCT) is a popular, rapidly growing imaging technique with an increasing number of biomedical applications due to its noninvasive nature. However, there are three major challenges in understanding and improving an OCT system. (1) Obtaining an OCT image is not easy: it either takes a real medical experiment or requires days of computer simulation. Without much data, it is difficult to study the physical processes underlying OCT imaging of different objects, simply because there are not many imaged objects. (2) Interpretation of an OCT image is also hard. This challenge is more profound than it appears. For instance, it would require a trained expert to tell from an OCT image of human skin whether there is a lesion or not. This is expensive in its own right, but even the expert cannot be sure about the exact size of the lesion or the width of the various skin layers. The take-away message is that analyzing an OCT image even at a high level usually requires a trained expert, and pixel-level interpretation is simply unrealistic. The reason is simple: we have OCT images but not their underlying ground-truth structure, so there is nothing to learn from. (3) The imaging depth of OCT is very limited (a millimeter or less in human tissue). While OCT uses infrared light for illumination to stay noninvasive, the downside is that photons at such long wavelengths can only penetrate a limited depth into the tissue before being back-scattered. To image a particular region of a tissue, photons first need to reach that region. As a result, OCT signals from deeper regions of the tissue are both weak (since few photons reach there) and distorted (due to multiple scattering of the contributing photons). This alone makes OCT images very hard to interpret.
This thesis addresses the above challenges by developing an advanced Monte Carlo simulation platform that is 10,000 times faster than the state-of-the-art simulator in the literature, bringing the simulation time down from 360 hours to a single minute. This powerful simulation tool not only enables us to efficiently generate as many OCT images of objects with arbitrary structure and shape as we want on a common desktop computer, but also provides the underlying ground truth of the simulated images at the same time, because we specify that structure at the start of the simulation. This is one of the key contributions of this thesis. What makes such a powerful simulation tool possible includes a thorough understanding of the signal formation process, a careful implementation of the importance sampling/photon splitting procedure, efficient use of a voxel-based mesh system in determining photon-mesh intersections, and parallel computation of the different A-scans that constitute a full OCT image, among other programming and mathematical techniques, which are explained in detail later in the thesis.
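To make the simulation concrete, the following is a deliberately minimal sketch of a photon-packet Monte Carlo that builds a single A-scan for a homogeneous slab. It is not the thesis code: all parameters are illustrative placeholders, and the importance sampling, photon splitting, voxel-based meshing, and parallelization over A-scans described above are omitted.

# Minimal photon-packet Monte Carlo for one OCT A-scan in a homogeneous slab.
# Illustrative sketch only: parameters are placeholders, and the variance-
# reduction and meshing machinery of the thesis simulator is not included.
import numpy as np

rng = np.random.default_rng(0)

MU_S = 10.0      # scattering coefficient [1/mm] (placeholder)
G = 0.9          # Henyey-Greenstein anisotropy factor (placeholder)
DEPTH = 1.0      # slab thickness [mm]
N_PHOTONS = 20000
N_BINS = 200     # depth bins of the A-scan
NA_COS = 0.9     # photons leaving within this cone of the optical axis are collected

def hg_cos(g):
    """Sample cos(theta) from the Henyey-Greenstein phase function."""
    s = (1.0 - g * g) / (1.0 - g + 2.0 * g * rng.random())
    return (1.0 + g * g - s * s) / (2.0 * g)

a_scan = np.zeros(N_BINS)
for _ in range(N_PHOTONS):
    z, mu_z = 0.0, 1.0            # depth and z-direction cosine (launched downward)
    path, weight = 0.0, 1.0       # optical path length and packet weight
    for _ in range(1000):         # cap the number of scattering events
        step = -np.log(rng.random()) / MU_S   # free path ~ exponential
        z += mu_z * step
        path += step
        if z < 0.0:                           # packet re-emerges at the surface
            if -mu_z > NA_COS:                # within the collection cone
                i = int(0.5 * path / (DEPTH / N_BINS))   # round-trip path -> depth
                if i < N_BINS:
                    a_scan[i] += weight
            break
        if z > DEPTH:                         # transmitted through the slab
            break
        # Scatter: update the z-direction cosine (azimuth drawn uniformly).
        ct = hg_cos(G)
        st = np.sqrt(max(0.0, 1.0 - ct * ct))
        mu_z = (mu_z * ct
                + np.sqrt(max(0.0, 1.0 - mu_z * mu_z)) * st * np.cos(2.0 * np.pi * rng.random()))

print(a_scan[:10])   # signal collected in the first ten depth bins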
Next we turn to the inverse problem: given an OCT image, predict/reconstruct its ground-truth structure at the pixel level. By solving this problem we can interpret an OCT image completely and precisely without help from a trained expert. It turns out that we can do much better. For simple structures we are able to reconstruct the ground truth of an OCT image more than 98% correctly, and for more complicated structures (e.g., a multi-layered brain structure) we reach about 93%. We achieve this through extensive use of machine learning. The success of the Monte Carlo simulation already puts us in a strong position by providing a great deal of data (effectively unlimited) in the form of (image, truth) pairs. Through a transformation of the high-dimensional response variable, we convert the learning task into a multi-output multi-class classification problem and a multi-output regression problem. We then build a hierarchical architecture of machine learning models (a committee of experts) and train different parts of the architecture with specifically designed data sets. In prediction, an unseen OCT image first goes through a classification model to determine its structure (e.g., the number and types of layers present in the image); the image is then handed to a regression model trained specifically for that particular structure, which predicts the thickness of the different layers and thereby reconstructs the ground truth of the image. We also demonstrate that ideas from Deep Learning can be used to further improve the performance.
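As a concrete illustration of this two-stage design, the sketch below wires a gating classifier to per-structure regressors using scikit-learn. It is a hedged outline under assumed data shapes (flattened images, integer structure labels, fixed-length thickness vectors), not the models or hyperparameters used in the thesis.

# Hedged sketch of the committee-of-experts idea: a classifier routes an OCT
# image to a regressor trained only on images of the predicted structure type.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

def train_committee(images, structure_labels, layer_thicknesses):
    """images: (n, p) flattened OCT images; structure_labels: (n,) integer type;
    layer_thicknesses: (n, m) per-layer thicknesses padded to a fixed length."""
    gate = RandomForestClassifier(n_estimators=200).fit(images, structure_labels)
    experts = {}
    for s in np.unique(structure_labels):
        mask = structure_labels == s
        # one multi-output regressor per structure type
        experts[s] = RandomForestRegressor(n_estimators=200).fit(
            images[mask], layer_thicknesses[mask])
    return gate, experts

def reconstruct(gate, experts, image):
    """Predict the structure type, then that structure's layer thicknesses."""
    s = gate.predict(image.reshape(1, -1))[0]
    return s, experts[s].predict(image.reshape(1, -1))[0]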
It is worth pointing out that solving the inverse problem automatically improves the imaging depth, since the lower half of an OCT image (i.e., greater depth), which could previously hardly be seen, now becomes fully resolved. Interestingly, although the OCT signals constituting the lower half of the image are weak, messy, and uninterpretable to human eyes, they still carry enough information that a well-trained machine learning model fed with them yields precisely the true structure of the object being imaged. This is another case where artificial intelligence (AI) outperforms humans. To the best of the author's knowledge, this thesis is not only a successful pixel-level reconstruction of OCT images but also the first attempt at one. Even attempting such a task would require a large number of fully annotated OCT images (hundreds or even thousands), which is clearly impossible without a powerful simulation tool like the one developed in this thesis.