5 results for "Learning by doing"
in CaltechTHESIS
Abstract:
Optical Coherence Tomography (OCT) is a popular, rapidly growing imaging technique with an increasing number of biomedical applications due to its noninvasive nature. However, there are three major challenges in understanding and improving an OCT system: (1) Obtaining an OCT image is not easy. It either requires a real medical experiment or days of computer simulation. Without much data, it is difficult to study the physical processes underlying OCT imaging of different objects, simply because there are not many imaged objects to study. (2) Interpreting an OCT image is also hard. This challenge is more profound than it appears. For instance, it takes a trained expert to tell from an OCT image of human skin whether a lesion is present. This is expensive in its own right, and even the expert cannot be sure about the exact size of the lesion or the width of the various skin layers. The take-away message is that analyzing an OCT image even at a high level usually requires a trained expert, and pixel-level interpretation is simply unrealistic. The reason is simple: we have OCT images but not their underlying ground-truth structure, so there is nothing to learn from. (3) The imaging depth of OCT is very limited (millimeter or sub-millimeter in human tissue). While OCT uses infrared light for illumination to stay noninvasive, the downside is that photons at such long wavelengths can only penetrate a limited depth into the tissue before being back-scattered. To image a particular region of a tissue, photons first need to reach that region. As a result, OCT signals from deeper regions of the tissue are both weak (since few photons reach them) and distorted (due to multiple scattering of the contributing photons). This alone makes OCT images very hard to interpret.
This thesis addresses the above challenges by developing an advanced Monte Carlo simulation platform that is 10,000 times faster than the state-of-the-art simulator in the literature, bringing the simulation time down from 360 hours to a single minute. This powerful simulation tool not only lets us efficiently generate as many OCT images of objects with arbitrary structure and shape as we want on a common desktop computer, it also provides the underlying ground truth of the simulated images, because we specify that ground truth when setting up the simulation. This is one of the key contributions of this thesis. Building such a powerful simulation tool required a thorough understanding of the signal-formation process, a careful implementation of the importance-sampling/photon-splitting procedure, efficient use of a voxel-based mesh system for determining photon-mesh intersections, and parallel computation of the different A-scans that constitute a full OCT image, among other programming and mathematical techniques, all of which are explained in detail later in the thesis.
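As a rough illustration of the photon-splitting/importance-sampling idea behind such a simulator, here is a minimal sketch with assumed parameter values and deliberately simplified physics; it is not the thesis's actual implementation, and it omits the voxel mesh and per-A-scan parallelism mentioned above.

```python
# Minimal sketch (not the thesis's code) of a photon-splitting Monte Carlo step:
# photon packets random-walk through a scattering medium, and at each scattering
# event a "split" contribution is sent straight back toward the detector so that
# rare detectable paths are sampled far more often than in a naive simulation.
# All parameter values and names here are illustrative assumptions.

import numpy as np

MU_S = 10.0          # scattering coefficient, 1/mm (assumed)
MU_A = 0.1           # absorption coefficient, 1/mm (assumed)
TISSUE_DEPTH = 1.0   # slab thickness, mm (assumed)
N_PHOTONS = 10_000

rng = np.random.default_rng(0)

def run_photon(rng):
    """Trace one photon packet; return (detected weight, total path length)."""
    z, uz, w, path = 0.0, 1.0, 1.0, 0.0   # depth, direction cosine, weight, path
    detected = 0.0
    for _ in range(1000):                  # hard cap on scattering events
        step = -np.log(rng.random()) / (MU_S + MU_A)
        z += uz * step
        path += step
        if z < 0.0:                        # escaped toward the detector
            detected += w
            break
        if z > TISSUE_DEPTH:               # transmitted out the back
            break
        w *= MU_S / (MU_S + MU_A)          # absorption handled by the weight
        # --- photon splitting / importance sampling (highly simplified) ---
        # a split copy is forced straight back to the detector, weighted by an
        # assumed backscatter probability and the attenuation over depth z
        p_back = 0.05
        detected += w * p_back * np.exp(-(MU_S + MU_A) * z)
        # the surviving packet scatters into a new isotropic direction
        uz = 2.0 * rng.random() - 1.0
        w *= (1.0 - p_back)
    return detected, path

signal = sum(run_photon(rng)[0] for _ in range(N_PHOTONS)) / N_PHOTONS
print(f"mean detected weight per launched photon: {signal:.4e}")
```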
Next we turn to the inverse problem: given an OCT image, predict/reconstruct its ground-truth structure at the pixel level. Solving this problem would let us interpret an OCT image completely and precisely without help from a trained expert, and it turns out that we can do much better than that. For simple structures we are able to reconstruct the ground truth of an OCT image more than 98% correctly, and for more complicated structures (e.g., a multi-layered brain structure) we reach 93%. We achieved this through extensive use of machine learning. The success of the Monte Carlo simulation already puts us in a strong position by providing a great deal of data (effectively unlimited) in the form of (image, truth) pairs. Through a transformation of the high-dimensional response variable, we convert the learning task into a multi-output multi-class classification problem and a multi-output regression problem. We then build a hierarchical architecture of machine learning models (a committee of experts) and train different parts of the architecture on specifically designed data sets. At prediction time, an unseen OCT image first goes through a classification model that determines its structure (e.g., the number and types of layers present in the image); the image is then handed to a regression model trained specifically for that structure, which predicts the thickness of the different layers and thereby reconstructs the ground truth of the image. We also demonstrate that ideas from deep learning can further improve the performance.
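The two-stage "committee of experts" pipeline described above might be sketched as follows. This is a minimal illustration on synthetic data; the model choices, feature layout, and names such as `predict` are assumptions for the sketch, not the architecture actually used in the thesis.

```python
# Minimal sketch of a classify-then-regress hierarchy: a classifier first
# predicts the structure class of an image, then a structure-specific regressor
# (one "expert" per class) predicts the per-layer thicknesses.

import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic stand-ins for (image, truth) pairs from the simulator:
# X holds flattened A-scan features, y_class the structure label,
# y_layers the layer thicknesses (padded to a fixed length).
n, d, n_classes, max_layers = 500, 64, 3, 4
X = rng.normal(size=(n, d))
y_class = rng.integers(0, n_classes, size=n)
y_layers = rng.uniform(0.1, 1.0, size=(n, max_layers))

# Stage 1: structure classifier trained on all data.
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y_class)

# Stage 2: one regression "expert" per structure class.
experts = {
    c: RandomForestRegressor(n_estimators=50, random_state=0).fit(
        X[y_class == c], y_layers[y_class == c])
    for c in range(n_classes)
}

def predict(image_features):
    """Route an unseen image through the classifier, then its expert regressor."""
    c = clf.predict(image_features.reshape(1, -1))[0]
    thicknesses = experts[c].predict(image_features.reshape(1, -1))[0]
    return c, thicknesses

structure, layers = predict(X[0])
print("predicted structure class:", structure, "layer estimates:", layers.round(3))
```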
It is worth pointing out that solving the inverse problem automatically improves the imaging depth: previously the lower half of an OCT image (i.e., the greater depths) could hardly be seen, but it now becomes fully resolved. Interestingly, although the OCT signals that make up the lower half of the image are weak, messy, and uninterpretable to the human eye, they still carry enough information that a well-trained machine learning model, when fed with them, recovers precisely the true structure of the object being imaged. This is another case in which Artificial Intelligence (AI) outperforms humans. To the best of the author's knowledge, this thesis is not only a successful attempt but also the first attempt to reconstruct an OCT image at the pixel level. Even attempting such a task would require a large number of fully annotated OCT images (hundreds or even thousands), which is clearly impossible without a powerful simulation tool like the one developed in this thesis.
Abstract:
Some aspects of wave propagation in thin elastic shells are considered. The governing equations are derived by a method which makes their relationship to the exact equations of linear elasticity quite clear. Finite wave propagation speeds are ensured by the inclusion of the appropriate physical effects.
The problem of a constant pressure front moving with constant velocity along a semi-infinite circular cylindrical shell is studied. The behavior of the solution immediately under the leading wave is found, as well as the short time solution behind the characteristic wavefronts. The main long time disturbance is found to travel with the velocity of very long longitudinal waves in a bar and an expression for this part of the solution is given.
When a constant moment is applied to the lip of an open spherical shell, there is an interesting effect due to the focusing of the waves. This phenomenon is studied and an expression is derived for the wavefront behavior for the first passage of the leading wave and its first reflection.
For the two problems mentioned, the method used involves reducing the governing partial differential equations to ordinary differential equations by means of a Laplace transform in time. The information sought is then extracted by performing the appropriate asymptotic expansion with the Laplace transform variable as the parameter.
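As a schematic illustration of this transform-and-expand procedure (shown here on a generic one-dimensional wave equation with quiescent initial conditions, not the shell equations themselves):

```latex
% Schematic of the procedure on a generic wave equation (not the shell
% equations), assuming zero initial displacement and velocity:
\begin{aligned}
  &\text{PDE:}\quad \frac{\partial^2 u}{\partial x^2}
     - \frac{1}{c^2}\,\frac{\partial^2 u}{\partial t^2} = 0,
  \qquad
  \bar{u}(x,s) = \int_0^\infty u(x,t)\,e^{-st}\,dt,\\[4pt]
  &\text{ODE in } x:\quad \frac{d^2\bar{u}}{dx^2} - \frac{s^2}{c^2}\,\bar{u} = 0
  \;\Longrightarrow\; \bar{u}(x,s) = A(s)\,e^{-sx/c}
  \quad (\text{bounded as } x \to \infty),\\[4pt]
  &\text{wavefront behavior: expand } A(s) \text{ for large } s;
  \text{ the factor } e^{-sx/c} \text{ corresponds to a delay } t = x/c .
\end{aligned}
```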
Abstract:
Detailed pulsed neutron measurements have been performed in graphite assemblies ranging in size from 30.48 cm x 38.10 cm x 38.10 cm to 91.44 cm x 66.67 cm x 66.67 cm. Results of the measurements have been compared with a modeled theoretical computation.
In the first set of experiments, we measured the effective decay constant of the neutron population in ten graphite stacks as a function of time after the source burst. We found the decay to be non-exponential in the six smallest assemblies, while in three larger assemblies the decay was exponential over a significant portion of the total measuring interval. The decay in the largest stack was exponential over the entire ten-millisecond measuring interval. The non-exponential decay mode occurred when the effective decay constant exceeded 1600 sec^(-1).
In a second set of experiments, we measured the spatial dependence of the neutron population in four graphite stacks as a function of time after the source pulse. By performing a harmonic analysis of the spatial shape of the neutron distribution, we were able to compute the effective decay constants of the first two spatial modes. In addition, we were able to compute the time-dependent effective wave number of the neutron distribution in the stacks.
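A minimal sketch of this kind of modal analysis on synthetic data is shown below; the slab width, detector positions, and decay constants are illustrative assumptions, not the experimental values.

```python
# Project the measured spatial profile at each time onto the first two sine
# modes of a slab, then fit the decay of each modal amplitude to an exponential
# to estimate an effective decay constant per mode.

import numpy as np

L = 0.60                                   # assumed extrapolated slab width, m
x = np.linspace(0.05, 0.55, 11)            # assumed detector positions, m
t = np.linspace(0.2e-3, 2.0e-3, 10)        # times after the source pulse, s

# Synthetic "measurement": two decaying spatial modes plus a little noise.
rng = np.random.default_rng(1)
true_lambda = np.array([900.0, 2000.0])    # modal decay constants, 1/s (illustrative)
modes = np.array([np.sin((k + 1) * np.pi * x / L) for k in range(2)])
counts = ((np.exp(-np.outer(t, true_lambda)) * np.array([1.0, 0.4])) @ modes
          + 0.002 * rng.normal(size=(t.size, x.size)))

# Project each time slice onto the two modes (least squares) ...
amps = np.linalg.lstsq(modes.T, counts.T, rcond=None)[0]   # shape (2, n_times)

# ... and fit ln(amplitude) vs. t for each mode to estimate its decay constant.
for k in range(2):
    slope = np.polyfit(t, np.log(np.abs(amps[k])), 1)[0]
    print(f"mode {k + 1}: effective decay constant ~ {-slope:.0f} 1/s")
```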
Finally, we used a Laplace transform technique and a simple modeled scattering kernel to solve a diffusion equation for the time and energy dependence of the neutron distribution in the graphite stacks. Comparison of these theoretical results with the results of the first set of experiments indicated that a more exact theoretical analysis would be required to describe the experiments adequately.
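For context, a diffusion equation of this type has the generic time- and energy-dependent form below, written with a general scattering kernel; the specific modeled kernel and boundary treatment used in the thesis are not reproduced here. A Laplace transform in time then replaces the time derivative with multiplication by the transform variable, leaving an equation in space and energy to be solved for each value of that variable.

```latex
\frac{1}{v(E)}\,\frac{\partial \phi(\mathbf{r},E,t)}{\partial t}
  = D(E)\,\nabla^{2}\phi
    - \Sigma_a(E)\,\phi
    - \Sigma_s(E)\,\phi
    + \int_0^{\infty} \Sigma_s(E' \to E)\,\phi(\mathbf{r},E',t)\,dE'
    + S(\mathbf{r},E,t)
```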
The implications of our experimental results for the theory of pulsed neutron experiments in polycrystalline media are discussed in the last chapter.
Abstract:
Biomolecular circuit engineering is critical for implementing complex functions in vivo and is a baseline method in synthetic biology. However, current methods for biomolecular circuit engineering are time-consuming and tedious: a complete design-build-test cycle typically takes weeks to months because there is no intermediary between design ex vivo and testing in vivo. In this work, we explore the development and application of a "biomolecular breadboard" composed of an in vitro transcription-translation (TX-TL) lysate to speed up the engineering design-build-test cycle. We first developed protocols for creating and using lysates for biological circuit design, simplifying the existing technology to an affordable ($0.03/uL) and easy-to-use three-tube reagent system. We then developed tools to accelerate circuit design by allowing linear DNA to be used in lieu of plasmid DNA and by utilizing principles of modular assembly, which reduced the design-build-test cycle to under a business day. We then characterized protein degradation dynamics in the breadboard to aid in implementing complex circuits. Finally, we demonstrated that the breadboard can be applied to engineer complex synthetic circuits in vitro and in vivo. Specifically, we utilized our understanding of linear DNA prototyping, modular assembly, and protein degradation dynamics to characterize the repressilator oscillator and to prototype novel three- and five-node negative feedback oscillators both in vitro and in vivo. We therefore believe the biomolecular breadboard has wide application as an intermediary for biological circuit engineering.
Abstract:
Single-cell functional proteomics assays can connect genomic information to biological function through quantitative and multiplex protein measurements. Tools for single-cell proteomics have developed rapidly over the past five years and are providing unique opportunities. This thesis describes an emerging microfluidics-based toolkit for single-cell functional proteomics, focusing on the development of the single-cell barcode chips (SCBCs) with applications in fundamental and translational cancer research.
The microchip designed to simultaneously quantify a panel of secreted, cytoplasmic, and membrane proteins from single cells is discussed first; it is the prototype for subsequent proteomic microchips with more sophisticated designs for preclinical cancer research and clinical applications. The SCBCs are a highly versatile and information-rich tool for single-cell functional proteomics. They are based upon isolating individual cells, or defined numbers of cells, within microchambers, each of which is equipped with a large antibody microarray (the barcode), with between a few hundred and ten thousand microchambers included within a single microchip. Functional proteomics assays at single-cell resolution yield unique pieces of information that significantly shape thinking in cancer research. An in-depth discussion of the analysis and interpretation of this unique information, such as functional protein fluctuations and protein-protein correlative interactions, follows.
The SCBC is a powerful tool for resolving the functional heterogeneity of cancer cells. It has the capacity to extract a comprehensive picture of the signal transduction network from single tumor cells and thus provides insight into the effect of targeted therapies on protein signaling networks. We will demonstrate this point by applying the SCBCs to investigate three isogenic cell lines of glioblastoma multiforme (GBM).
The cancer cell population is highly heterogeneous, with high-amplitude fluctuations at the single-cell level that in turn confer robustness on the entire population. The notion of a stable population existing in the presence of random fluctuations is reminiscent of many physical systems that are successfully understood using statistical physics, so tools derived from that field can plausibly be applied, using fluctuations to determine the nature of signaling networks. In the second part of the thesis, we focus on such a case, using thermodynamics-motivated principles to understand cancer cell hypoxia: single-cell proteomics assays coupled with a quantitative version of Le Chatelier's principle derived from statistical mechanics yield detailed and surprising predictions, which were found to be correct in both a cell line and a primary tumor model.
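A generic form of such a fluctuation-response ("quantitative Le Chatelier") relation is sketched below as an assumption about the type of statement being used, not a quotation from the thesis: for a maximum-entropy distribution of measured protein levels subjected to a small perturbation with conjugate "forces", the first-order shift in each mean is governed by the unperturbed covariances.

```latex
\langle \delta X_i \rangle \;\approx\;
  \sum_j \langle \delta X_i \, \delta X_j \rangle \, \lambda_j
```

In this reading, the measured protein-protein covariance matrix predicts how the population-average protein levels respond to a perturbation such as reduced oxygen.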
The third part of the thesis demonstrates the application of this technology in preclinical cancer research to study GBM cancer cell resistance to molecular targeted therapy. Physical approaches to anticipating therapy resistance and identifying effective therapy combinations are discussed in detail. Our approach is based upon elucidating the signaling coordination within the phosphoprotein signaling pathways that are hyperactivated in human GBMs, and interrogating how that coordination responds to perturbation by a targeted inhibitor. Most signaling cascades consist of strongly coupled protein-protein interactions. A physical analogy for such a system is the strongly coupled atom-atom interactions in a crystal lattice. Just as the atomic interactions can be decomposed into a series of independent normal vibrational modes, a simplified picture of signaling network coordination can be obtained by diagonalizing protein-protein correlation or covariance matrices, decomposing the pairwise correlative interactions into a set of distinct linear combinations of signaling proteins (i.e., independent signaling modes). By doing so, two independent signaling modes, one associated with mTOR signaling and a second associated with ERK/Src signaling, have been resolved, which in turn allow us to anticipate resistance, to design combination therapies that are effective, and to identify those therapies and therapy combinations that will be ineffective. We validated our predictions in mouse tumor models, and all predictions were borne out.
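A minimal sketch of this kind of mode decomposition on synthetic single-cell data follows; the protein names and the built-in two-factor structure are illustrative assumptions, not the thesis's measured panel or results.

```python
# Resolve "independent signaling modes" by diagonalizing a protein-protein
# covariance matrix: eigenvectors give linear combinations of proteins that
# fluctuate together; eigenvalues give the variance carried by each mode.

import numpy as np

rng = np.random.default_rng(2)
proteins = ["p-mTOR", "p-S6K", "p-ERK", "p-Src"]      # illustrative panel

# Synthetic single-cell phosphoprotein levels with two latent drivers
# (an "mTOR-like" factor and an "ERK/Src-like" factor), plus measurement noise.
n_cells = 2000
f_mtor = rng.normal(size=n_cells)
f_erk = rng.normal(size=n_cells)
X = np.column_stack([
    f_mtor + 0.2 * rng.normal(size=n_cells),          # p-mTOR
    0.9 * f_mtor + 0.2 * rng.normal(size=n_cells),    # p-S6K
    f_erk + 0.2 * rng.normal(size=n_cells),           # p-ERK
    0.8 * f_erk + 0.2 * rng.normal(size=n_cells),     # p-Src
])

# Diagonalize the covariance matrix; eigenvectors are the signaling modes.
cov = np.cov(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]                     # strongest modes first

for k in order[:2]:
    loadings = ", ".join(f"{p}: {w:+.2f}" for p, w in zip(proteins, eigvecs[:, k]))
    print(f"mode (variance {eigvals[k]:.2f}) -> {loadings}")
```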
In the last part, some preliminary results on the clinical translation of single-cell proteomics chips are presented. The successful demonstration of our work on human-derived xenografts provides the rationale for extending the current work into the clinic. It will enable us to interrogate GBM tumor samples in a way that could yield a straightforward, rapid interpretation, so that we can give therapeutic guidance to the attending physicians within a clinically relevant time scale. The technical challenges of clinical translation are presented, and our solutions to address them are discussed as well. A clinical case study then follows, in which preliminary data collected from a pediatric GBM patient bearing an EGFR-amplified tumor are presented to demonstrate the general protocol and workflow of the proposed clinical studies.