13 results for easy

in CaltechTHESIS


Relevance: 10.00%

Abstract:

Two of the most important questions in mantle dynamics are investigated in three separate studies: the influence of phase transitions (studies 1 and 2), and the influence of temperature-dependent viscosity (study 3).

(1) Numerical modeling of mantle convection in a three-dimensional spherical shell incorporating the two major mantle phase transitions reveals an inherently three-dimensional flow pattern characterized by accumulation of cold downwellings above the 670 km discontinuity, and cylindrical 'avalanches' of upper mantle material into the lower mantle. The exothermic phase transition at 400 km depth reduces the degree of layering. A region of strongly depressed temperature occurs at the base of the mantle. The temperature field is strongly modulated by this partial layering, both locally and in globally averaged diagnostics. Flow penetration is strongly wavelength-dependent, with easy penetration at long wavelengths but strong inhibition at short wavelengths. The amplitude of the geoid is not significantly affected.

(2) Using a simple criterion for the deflection of an upwelling or downwelling by an endothermic phase transition, the scaling of the critical phase buoyancy parameter with the important length scales is obtained. The derived trends match those observed in numerical simulations, i.e., deflection is enhanced by (a) shorter wavelengths, (b) narrower up/downwellings, (c) internal heating, and (d) narrower phase loops.
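
For reference, deflection criteria of this kind are usually phrased in terms of the dimensionless phase buoyancy parameter; a standard form (the Christensen-Yuen definition, assumed here rather than quoted from the thesis) is

\[
P = \frac{\gamma\,\Delta\rho}{\alpha\,\rho^{2} g\, d},
\]

where γ is the Clapeyron slope of the transition, Δρ the density jump across it, α the thermal expansivity, ρ a reference density, g gravity, and d the layer depth. For an endothermic transition γ < 0, and the flow becomes layered when |P| exceeds a critical value; the scaling result above says this critical value is reached sooner for shorter wavelengths, narrower up/downwellings, internal heating, and narrower phase loops.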

(3) A systematic investigation into the effects of temperature-dependent viscosity on mantle convection has been performed in three-dimensional Cartesian geometry, with a factor of 1000-2500 viscosity variation, and Rayleigh numbers of 10^5-10^7. Enormous differences in model behavior are found, depending on the details of rheology, heating mode, compressibility and boundary conditions. Stress-free boundaries, compressibility, and temperature-dependent viscosity all favor long-wavelength flows, even in internally heated cases. However, small cells are obtained with some parameter combinations. Downwelling plumes and upwelling sheets are possible when viscosity is dependent solely on temperature. Viscous dissipation becomes important with temperature-dependent viscosity.
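
For orientation (standard definitions assumed here, not quoted from the thesis), the Rayleigh number governing the vigor of convection and a commonly used exponential law for temperature-dependent viscosity are

\[
Ra = \frac{\rho g \alpha \Delta T\, d^{3}}{\kappa\, \eta_{0}},
\qquad
\eta(T) = \eta_{0}\, e^{-\theta T},
\]

where ΔT is the temperature drop across a layer of depth d, κ is the thermal diffusivity, η0 is a reference viscosity, and T is the nondimensional temperature; a total viscosity contrast of 1000-2500 corresponds to θ = ln(1000) ≈ 6.9 up to θ = ln(2500) ≈ 7.8.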

The sensitivity of mantle flow and structure to these various complexities illustrates the importance of performing mantle convection calculations with rheological and thermodynamic properties matching as closely as possible those of the Earth.

Relevance: 10.00%

Abstract:

The dissertation is concerned with the mathematical study of various network problems. First, three real-world networks are considered: (i) the human brain network, (ii) communication networks, and (iii) electric power networks. Although these networks perform very different tasks, they share similar mathematical foundations. The high-level goal is to analyze and/or synthesize each of these systems from a “control and optimization” point of view. After studying these three real-world networks, two abstract network problems, both motivated by power systems, are explored. The first is “flow optimization over a flow network” and the second is “nonlinear optimization over a generalized weighted graph”. The results derived in this dissertation are summarized below.

Brain Networks: Neuroimaging data reveals the coordinated activity of spatially distinct brain regions, which may be represented mathematically as a network of nodes (brain regions) and links (interdependencies). To obtain the brain connectivity network, the graphs associated with the correlation matrix and the inverse covariance matrix (describing marginal and conditional dependencies between brain regions, respectively) have been proposed in the literature. A question arises as to whether any of these graphs provides useful information about brain connectivity. Due to the electrical properties of the brain, this problem will be investigated in the context of electrical circuits. First, we consider an electric circuit model and show that the inverse covariance matrix of the node voltages reveals the topology of the circuit. Second, we study the problem of finding the topology of the circuit based only on measurements. In this case, by assuming that the circuit is hidden inside a black box and only the nodal signals are available for measurement, the aim is to find the topology of the circuit when only a limited number of samples is available. For this purpose, we deploy the graphical lasso technique to estimate a sparse inverse covariance matrix. It is shown that the graphical lasso may find most of the circuit topology if the exact covariance matrix is well-conditioned. However, it may fail to work well when this matrix is ill-conditioned. To deal with ill-conditioned matrices, we propose a small modification to the graphical lasso algorithm and demonstrate its performance. Finally, the technique developed in this work will be applied to the resting-state fMRI data of a number of healthy subjects.
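
As an illustrative sketch of the estimation step (generic scikit-learn usage, not the thesis code or its modified algorithm; the toy "circuit", sample size, and regularization value below are hypothetical):

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)

# Hypothetical "circuit": a chain of 8 nodes. Its precision (inverse covariance)
# matrix is tridiagonal, so the graph we hope to recover is a path.
n = 8
precision_true = np.eye(n) * 2.0
for i in range(n - 1):
    precision_true[i, i + 1] = precision_true[i + 1, i] = -0.8
cov_true = np.linalg.inv(precision_true)

# "Measured" nodal signals: finitely many samples drawn from the model.
X = rng.multivariate_normal(np.zeros(n), cov_true, size=400)

# Graphical lasso estimates a sparse precision matrix; nonzero off-diagonal
# entries are read off as edges of the estimated circuit topology.
est = GraphicalLasso(alpha=0.05).fit(X)
edges = [(i, j) for i in range(n) for j in range(i + 1, n)
         if abs(est.precision_[i, j]) > 1e-3]
print(edges)   # ideally the chain edges (0,1), (1,2), ..., (6,7)
```

The well-conditioned versus ill-conditioned distinction made above corresponds to how reliably this recovery works as the true covariance matrix becomes nearly singular.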

Communication Networks: Congestion control techniques aim to adjust the transmission rates of competing users in the Internet in such a way that the network resources are shared efficiently. Despite the progress in the analysis and synthesis of Internet congestion control, almost all existing fluid models of congestion control assume that every link in the path of a flow observes the original source rate. To address this issue, a more accurate model is derived in this work for the behavior of the network under an arbitrary congestion controller, which takes into account the effect of buffering (queueing) on data flows. Using this model, it is proved that the well-known Internet congestion control algorithms may no longer be stable for the common pricing schemes, unless a certain sufficient condition is satisfied. It is also shown that these algorithms are guaranteed to be stable if a new pricing mechanism is used.
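
For context, the kind of fluid model being refined here (a standard Kelly-style primal law, written down as an illustration rather than the buffering-aware model derived in the thesis) is

\[
\dot{x}_s(t) = k_s\!\left(w_s - x_s(t)\sum_{l \in L(s)} p_l\bigl(y_l(t)\bigr)\right),
\qquad
y_l(t) = \sum_{s:\, l \in L(s)} x_s(t),
\]

where x_s is the transmission rate of source s, L(s) the set of links on its path, p_l(·) the congestion price of link l, and y_l the aggregate rate at link l. The second equation encodes exactly the assumption criticized above: every link on the path is taken to observe the original source rate x_s, rather than the rate after upstream queueing.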

Electrical Power Networks: Optimal power flow (OPF) has been one of the most studied problems for power systems since its introduction by Carpentier in 1962. This problem is concerned with finding an optimal operating point of a power network minimizing the total power generation cost subject to network and physical constraints. It is well known that OPF is computationally hard to solve due to the nonlinear interrelation among the optimization variables. The objective is to identify a large class of networks over which every OPF problem can be solved in polynomial time. To this end, a convex relaxation is proposed, which solves the OPF problem exactly for every radial network and every meshed network with a sufficient number of phase shifters, provided power over-delivery is allowed. The concept of “power over-delivery” is equivalent to relaxing the power balance equations to inequality constraints.
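
As a hedged sketch of the flavor of relaxation involved (the generic semidefinite relaxation of OPF; the thesis's formulation with phase shifters and power over-delivery is richer than this), one writes the bus voltage vector as v, replaces the rank-one matrix v v^H by a Hermitian matrix W ⪰ 0, and drops the rank constraint:

\[
\min_{W \succeq 0} \;\sum_{k} f_k\bigl(\operatorname{tr}(Y_k W)\bigr)
\quad \text{s.t.} \quad
P_k^{\min} \le \operatorname{tr}(Y_k W) \le P_k^{\max},
\qquad
(V_k^{\min})^2 \le W_{kk} \le (V_k^{\max})^2,
\]

where Y_k is a Hermitian matrix built from the network admittance so that tr(Y_k W) represents the power injection at bus k. The relaxed problem is a semidefinite program, hence solvable in polynomial time, and it solves the original OPF exactly whenever the optimal W turns out to have rank one.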

Flow Networks: In this part of the dissertation, the minimum-cost flow problem over an arbitrary flow network is considered. In this problem, each node is associated with some possibly unknown injection, each line has two unknown flows at its ends related to each other via a nonlinear function, and all injections and flows need to satisfy certain box constraints. This problem, named generalized network flow (GNF), is highly non-convex due to its nonlinear equality constraints. Under the assumption of monotonicity and convexity of the flow and cost functions, a convex relaxation is proposed, which always finds the optimal injections. A primary application of this work is in the OPF problem. The results of this work on GNF prove that the relaxation on power balance equations (i.e., load over-delivery) is not needed in practice under a very mild angle assumption.
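
A minimal sketch of the convexification idea (notation illustrative, under the stated monotonicity and convexity assumptions; the precise relaxed set in the thesis may be defined somewhat differently): if the two end flows of a line satisfy p_{ji} = f_{ij}(p_{ij}) with f_{ij} convex, the non-convex equality is replaced by the convex inequality

\[
p_{ji} \ge f_{ij}(p_{ij}),
\]

intersected with the box constraints. The feasible set of the relaxed problem is then convex, and the result quoted above is that, for monotone cost functions, an optimal solution of the relaxation still yields the correct optimal nodal injections even if individual line flows land strictly inside the relaxed region.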

Generalized Weighted Graphs: Motivated by power optimizations, this part aims to find a global optimization technique for a nonlinear optimization defined over a generalized weighted graph. Every edge of this type of graph is associated with a weight set corresponding to the known parameters of the optimization (e.g., the coefficients). The motivation behind this problem is to investigate how the (hidden) structure of a given real/complex valued optimization makes the problem easy to solve, and indeed the generalized weighted graph is introduced to capture the structure of an optimization. Various sufficient conditions are derived, which relate the polynomial-time solvability of different classes of optimization problems to weak properties of the generalized weighted graph such as its topology and the sign definiteness of its weight sets. As an application, it is proved that a broad class of real and complex optimizations over power networks are polynomial-time solvable due to the passivity of transmission lines and transformers.

Relevance: 10.00%

Abstract:

Over the past five years, the cost of solar panels has dropped drastically and, in concert, the number of installed modules has risen exponentially. However, solar electricity is still more than twice as expensive as electricity from a natural gas plant. Fortunately, wire array solar cells have emerged as a promising technology for further lowering the cost of solar.

Si wire array solar cells are formed with a unique, low cost growth method and use 100 times less material than conventional Si cells. The wires can be embedded in a transparent, flexible polymer to create a free-standing array that can be rolled up for easy installation in a variety of form factors. Furthermore, by incorporating multijunctions into the wire morphology, higher efficiencies can be achieved while taking advantage of the unique defect relaxation pathways afforded by the 3D wire geometry.

The work in this thesis shepherded Si wires from undoped arrays to flexible, functional, large-area devices and laid the groundwork for multijunction wire array cells. Fabrication techniques were developed to turn intrinsic Si wires into full p-n junctions, and the wires were passivated with a-Si:H and a-SiNx:H. Single-wire devices yielded open-circuit voltages of 600 mV and efficiencies of 9%. The arrays were then embedded in a polymer and contacted with a transparent, flexible Ni-nanoparticle and Ag-nanowire top contact. The contact connected >99% of the wires in parallel and yielded flexible, substrate-free solar cells featuring hundreds of thousands of wires.

Building on the success of the Si wire arrays, GaP was epitaxially grown on the material to create heterostructures for photoelectrochemistry. These cells were limited by low absorption in the GaP due to its indirect bandgap, and poor current collection due to a diffusion length of only 80 nm. However, GaAsP on SiGe offers a superior combination of materials, and wire architectures based on these semiconductors were investigated for multijunction arrays. These devices offer potential efficiencies of 34%, as demonstrated through an analytical model and optoelectronic simulations. SiGe and Ge wires were fabricated via chemical-vapor deposition and reactive ion etching. GaAs was then grown on these substrates at the National Renewable Energy Lab and yielded ns lifetime components, as required for achieving high efficiency devices.

Relevance: 10.00%

Abstract:

Optical frequency combs (OFCs) provide a direct phase-coherent link between optical and RF frequencies, and enable precision measurement of optical frequencies. In recent years, a new class of frequency combs (microcombs) has emerged, based on parametric frequency conversion in dielectric microresonators. Microcombs have large line spacings, from tens to hundreds of GHz, allowing easy access to individual comb lines for arbitrary waveform synthesis. They also provide broadband parametric gain, not limited by the specific atomic or molecular transitions of conventional OFCs. The emerging applications of microcombs include low-noise microwave generation, astronomical spectrograph calibration, direct comb spectroscopy, and high-capacity telecommunications.

In this thesis, research is presented starting with the introduction of a new type of chemically etched, planar silica-on-silicon disk resonator. A record Q factor of 875 million is achieved for on-chip devices. A simple and accurate approach to characterizing the FSR and dispersion of microcavities is demonstrated. Microresonator-based frequency combs (microcombs) with microwave repetition rates below 80 GHz are demonstrated on a chip for the first time. Low threshold powers (as low as 1 mW) are demonstrated across a wide range of resonator FSRs, from 2.6 to 220 GHz, in surface-loss-limited disk resonators. The rich and complex dynamics of microcomb RF noise are studied. High-coherence RF phase locking of microcombs is demonstrated, in which injection locking of the subcomb offset frequencies is observed via pump-detuning alignment. Moreover, temporal mode locking, featuring subpicosecond pulses from a parametric 22 GHz microcomb, is observed. We further demonstrate shot-noise-limited white phase noise of a microcomb for the first time. Finally, stabilization of the microcomb repetition rate is realized by phase-locked-loop control.

For another major nonlinear optical application of disk resonators, highly coherent stimulated Brillouin lasers (SBLs) on silicon are also demonstrated, with a record-low Schawlow-Townes noise for any chip-based laser (less than 0.1 Hz^2/Hz) and low technical noise comparable to that of commercial narrow-linewidth fiber lasers. The SBL devices are efficient, featuring more than 90% quantum efficiency and thresholds as low as 60 microwatts. Moreover, novel properties of the SBL are studied, including cascaded operation, threshold tuning, and mode-pulling phenomena. Furthermore, high-performance microwave generation using on-chip cascaded Brillouin oscillation is demonstrated. This oscillator is also robust enough to be incorporated as the optical voltage-controlled oscillator in the first demonstration of a photonics-based microwave frequency synthesizer. Finally, applications of microresonators as frequency reference cavities and low-phase-noise optomechanical oscillators are presented.

Relevance: 10.00%

Abstract:

DNA damage is extremely detrimental to the cell and must be repaired to protect the genome. DNA is capable of conducting charge through the overlapping π-orbitals of stacked bases; this phenomenon is extremely sensitive to the integrity of the π-stack, as perturbations attenuate DNA charge transport (CT). Based on studies of the E. coli base excision repair (BER) proteins EndoIII and MutY, it has recently been proposed that redox-active proteins containing metal clusters can utilize DNA CT to signal one another to locate sites of DNA damage.

To expand our repertoire of proteins that utilize DNA-mediated signaling, we measured the DNA-bound redox potential of the nucleotide excision repair (NER) helicase XPD from Sulfolobus acidocaldarius. A midpoint potential of 82 mV versus NHE was observed, resembling that of the previously reported BER proteins. The redox signal increases in intensity upon ATP hydrolysis only for the WT protein and for mutants that maintain ATPase activity, not for ATPase-deficient mutants. The signal increase correlates directly with ATPase activity, suggesting that DNA-mediated signaling may play a general role in protein signaling. Several mutations in human XPD that lead to XP-related diseases have been identified; using SaXPD, we explored how these mutations, which are conserved in the thermophile, affect protein electrochemistry.

To further understand the electrochemical signaling of XPD, we studied the yeast S. cerevisiae Rad3 protein. ScRad3 mutants were incubated on a DNA-modified electrode and exhibited a redox potential similar to that of SaXPD. We developed a haploid strain of S. cerevisiae that allowed for easy manipulation of Rad3. In a survival assay, the ATPase- and helicase-deficient mutants show little survival, while the two disease-related mutants exhibit survival similar to WT. When WT and G47R (ATPase/helicase-deficient) strains were challenged with different DNA-damaging agents, both exhibited comparable survival in the presence of hydroxyurea, whereas with methyl methanesulfonate and camptothecin the G47R strain exhibited a significant change in growth, suggesting that Rad3 is involved in repairing damage beyond traditional NER substrates. Together, these data expand our understanding of redox-active proteins at the interface of DNA repair.

Relevance: 10.00%

Abstract:

How powerful are Quantum Computers? Despite the prevailing belief that Quantum Computers are more powerful than their classical counterparts, this remains a conjecture backed by little formal evidence. Shor's famous factoring algorithm [Shor97] gives an example of a problem that can be solved efficiently on a quantum computer with no known efficient classical algorithm. Factoring, however, is unlikely to be NP-Hard, meaning that few unexpected formal consequences would arise, should such a classical algorithm be discovered. Could it then be the case that any quantum algorithm can be simulated efficiently classically? Likewise, could it be the case that Quantum Computers can quickly solve problems much harder than factoring? If so, where does this power come from, and what classical computational resources do we need to solve the hardest problems for which there exist efficient quantum algorithms?

We make progress toward understanding these questions through studying the relationship between classical nondeterminism and quantum computing. In particular, is there a problem that can be solved efficiently on a Quantum Computer that cannot be efficiently solved using nondeterminism? In this thesis we address this problem from the perspective of sampling problems. Namely, we give evidence that approximately sampling the Quantum Fourier Transform of an efficiently computable function, while easy quantumly, is hard for any classical machine in the Polynomial Time Hierarchy. In particular, we prove the existence of a class of distributions that can be sampled efficiently by a Quantum Computer, that likely cannot be approximately sampled in randomized polynomial time with an oracle for the Polynomial Time Hierarchy.
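
To make the sampled object concrete (a standard Fourier-sampling setup given here for illustration; the thesis's exact family of distributions may differ in details): for an efficiently computable f : {0,1}^n → {0,1}, applying H^{⊗n}, a phase oracle for f, and H^{⊗n} again to |0^n⟩ and then measuring produces the distribution

\[
\Pr[y] = \hat{f}(y)^{2},
\qquad
\hat{f}(y) = \frac{1}{2^{n}} \sum_{x \in \{0,1\}^{n}} (-1)^{f(x) + x\cdot y},
\]

which a quantum computer samples exactly in polynomial time. The hardness evidence referred to above is that even approximately sampling such distributions is unlikely to be possible in randomized polynomial time, even with an oracle for the Polynomial Time Hierarchy.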

Our work complements and generalizes the evidence given in Aaronson and Arkhipov's work [AA2013] where a different distribution with the same computational properties was given. Our result is more general than theirs, but requires a more powerful quantum sampler.

Relevance: 10.00%

Abstract:

The attitude of the medieval church towards violence before the First Crusade in 1095 underwent a significant institutional evolution, from the peaceful tradition of the New Testament and the Roman persecution, through the prelate-led military campaigns of the Carolingian period and the Peace of God era. It would be superficially easy to characterize this transformation as the pragmatic and entirely secular response of a growing power to the changing world. However, such a simplification does not fully do justice to the underlying theology. While church leaders from the 5th Century to the 11th had vastly different motivations and circumstances under which to develop their responses to a variety of violent activities, the teachings of Augustine of Hippo provided a unifying theme. Augustine’s just war theology, in establishing which conflicts are acceptable in the eyes of God, focused on determining whether a proper causa belli or basis for war exists, and then whether a legitimate authority declares and leads the war. Augustine masterfully integrated aspects of the Old and New Testaments to create a lasting and compelling case for his definition of justified violence. Although at different times and places his theology has been used to support a variety of different attitudes, the profound influence of his work on the medieval church’s evolving position on violence is clear.

Relevance: 10.00%

Abstract:

I. Foehn winds of southern California.
An investigation of the hot, dry and dust-laden winds occurring in the late fall and early winter in the Los Angeles Basin, and attributed in the past to the influences of the desert regions to the north, revealed that these currents were of a foehn nature. Their properties were found to be entirely due to dynamical heating produced in the descent from the high-level areas in the interior to the lower Los Angeles Basin. Any dust associated with the phenomenon was found to be acquired from the Los Angeles area rather than transported from the desert. It was found that the frequency of occurrence of a mild foehn of this type during this season was sufficient to warrant its classification as a winter monsoon. This results from the topography of the Los Angeles region, which allows easy entrance of air from the interior by virtue of the low-level mountain passes north of the area. This monsoon provides the mild winter climate of southern California, since temperatures associated with the foehn currents are far higher than those experienced when maritime air from the adjacent Pacific Ocean occupies the region.
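
As an illustrative back-of-the-envelope check (standard dry-adiabatic reasoning, not a figure taken from the thesis), unsaturated air descending warms at the dry adiabatic lapse rate

\[
\Gamma_d = \frac{g}{c_p} \approx \frac{9.8\ \mathrm{m\,s^{-2}}}{1005\ \mathrm{J\,kg^{-1}\,K^{-1}}} \approx 9.8\ \mathrm{K\,km^{-1}},
\]

so air leaving an interior plateau near 1,500 m and sinking to the Los Angeles Basin near sea level arrives roughly 15 K warmer (and correspondingly drier) than it started, with no desert heat source required.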

II. Foehn wind cyclo-genesis.
Intense anticyclones frequently build up over the high-level regions of the Great Basin and Columbia Plateau, which lie between the Sierra Nevada and Cascade Mountains to the west and the Rocky Mountains to the east. The outflow from these anticyclones produces extensive foehns east of the Rockies in the comparatively low-level areas of the middle west and the Canadian provinces of Alberta and Saskatchewan. Normally at this season of the year, very cold polar continental (Pc) air masses are present over this territory, and with the occurrence of these foehns, marked discontinuity surfaces arise between the warm foehn current, which is obliged to slide over the colder mass, and the Pc air to the east. Cyclones are easily produced from this phenomenon and take the form of unstable waves which propagate along the discontinuity surface between the two dissimilar masses. A continual series of such cyclones was found to occur as long as the Great Basin anticyclone was maintained with undiminished intensity.

III. Weather conditions associated with the Akron disaster.
This situation illustrates the speedy development and propagation of young disturbances in the eastern United States during the spring of the year under the influence of the conditionally unstable tropical maritime air masses which characterise the region. It also furnishes an excellent example of the superiority of air mass and frontal methods of weather prediction for aircraft operation over the older methods based upon pressure distribution.

IV. The Los Angeles storm of December 30, 1933 to January 1, 1934.
This discussion points out some of the fundamental interactions occurring between air masses of the North Pacific Ocean in connection with Pacific Coast storms and the value of topographic and aerological considerations in predicting them. Estimates of rainfall intensity and duration from analyses of this type may be made and would prove very valuable in the Los Angeles area in connection with flood control problems.

Relevance: 10.00%

Abstract:

In the first part of the thesis we explore three fundamental questions that arise naturally when we conceive a machine learning scenario where the training and test distributions can differ. Contrary to conventional wisdom, we show that mismatched training and test distributions can in fact yield better out-of-sample performance. This optimal performance can be obtained by training with the dual distribution, an optimal training distribution that depends on the test distribution set by the problem, but not on the target function that we want to learn. We show how to obtain this distribution in both discrete and continuous input spaces, as well as how to approximate it in a practical scenario. Benefits of using this distribution are exemplified in both synthetic and real data sets.

In order to apply the dual distribution in the supervised learning scenario where the training data set is fixed, it is necessary to use weights to make the sample appear as if it came from the dual distribution. We explore the negative effect that weighting a sample can have. The theoretical decomposition of the effect of weighting on the out-of-sample error is easy to understand but not actionable in practice, as the quantities involved cannot be computed. Hence, we propose the Targeted Weighting algorithm, which determines, for a given set of weights, whether the out-of-sample performance will improve or not in a practical setting. This is necessary because the setting assumes there are no labeled points distributed according to the test distribution, only unlabeled samples.
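
As a hedged illustration of the weighting step (generic importance weighting for covariate shift, not the thesis's dual-distribution construction or the Targeted Weighting algorithm; the densities, data, and model below are hypothetical stand-ins):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Hypothetical 1-D covariate-shift setup: training inputs from N(0, 1),
# test inputs from N(1, 1); only the training labels are available.
x_train = rng.normal(0.0, 1.0, size=(500, 1))
y_train = (x_train[:, 0] + 0.3 * rng.standard_normal(500) > 0).astype(int)

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Importance weights w(x) = p_test(x) / p_train(x); here both densities are
# known Gaussians, so the ratio is available in closed form.
w = gaussian_pdf(x_train[:, 0], 1.0, 1.0) / gaussian_pdf(x_train[:, 0], 0.0, 1.0)

# The weighted fit makes the fixed training sample behave as if it had been
# drawn from the target (test or dual) distribution; the unweighted fit is
# kept for comparison.
weighted_model = LogisticRegression().fit(x_train, y_train, sample_weight=w)
unweighted_model = LogisticRegression().fit(x_train, y_train)
```

Large weights inflate the variance of the weighted fit, which is the negative effect discussed above and, per the description in the abstract, what a procedure like Targeted Weighting is designed to detect before the weights are applied.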

Finally, we propose a new class of matching algorithms that can be used to match the training set to a desired distribution, such as the dual distribution (or the test distribution). These algorithms can be applied to very large datasets, and we show how they lead to improved performance on a large real dataset, the Netflix dataset. Their favorable computational complexity is the main source of their advantage over previous algorithms proposed in the covariate shift literature.

In the second part of the thesis we apply Machine Learning to the problem of behavior recognition. We develop a specific behavior classifier to study fly aggression, and we develop a system for analyzing animal behavior in videos with minimal supervision. The system, which we call CUBA (Caltech Unsupervised Behavior Analysis), detects movemes, actions, and stories from time series describing the positions of animals in videos. The method summarizes the data and provides biologists with a mathematical tool to test new hypotheses. Other benefits of CUBA include finding classifiers for specific behaviors without the need for annotation, as well as providing means to discriminate groups of animals, for example, according to their genetic line.

Relevance: 10.00%

Abstract:

Due to the universal lack of donor tissue, there has been emerging interest in engineering materials to stimulate living cells to restore the features and functions of injured organs. We are particularly interested in developing materials for corneal use, where the necessity to maintain the tissue’s transparency presents an additional challenge. Every year, there are 1.5 – 2 million new cases of monocular blindness due to irregular healing of corneal injuries, dwarfing the approximately 150,000 corneal transplants performed. The large gap between the need and availability of cornea transplantation motivates us to develop a wound-healing scaffold that can prevent corneal blindness.

To develop such a scaffold, it is necessary to regulate the cells responsible for repairing the damaged cornea, namely myofibroblasts, which produce the disordered, non-refractive-index-matched scar that leads to corneal blindness. Using in vitro assays, we identified that protein nanofibers of a certain orientation can promote cell migration and modulate the myofibroblast phenotype. The nanofibers are also transparent, easy to handle, and non-cytotoxic. To adhere the nanofibers to a wound bed, we examined the use of two different in situ forming hydrogels: an artificial extracellular matrix protein (aECM)-based gel and a photo-crosslinkable heparin-based gel. Both hydrogels can be formed within minutes, are transparent upon gelation, and are easily tunable.

Using an in vivo mouse model for epithelial defects, we show that our corneal scaffolds (nanofibers together with hydrogel) are well-tolerated (no inflammatory response or turbidity) and support epithelium regrowth. We developed an ex vivo corneal tissue culture model where corneas that are wounded and treated with our scaffold can be cultured while retaining their ability to repair wounds for up to 21 days. Using this technique, we found that the aECM-based treatment induced a more favorable wound response than the heparin-based treatment, prompting us to further examine the efficacy of the aECM-based treatment in vivo using a rabbit model for stromal wounds. Results show that treated corneas have fewer myofibroblasts and immune cells than untreated ones, indicating that our corneal scaffold shows promise in promoting a calmer wound response and preventing corneal haze formation.

Relevance: 10.00%

Abstract:

The diketopiperazine (DKP) motif is found in a wide range of biologically active natural products. This work details our efforts toward two classes of DKP-containing natural products.

Class one features the pyrroloindoline structure, derived from tryptophan. Our group developed a highly enantioselective (3 + 2) formal cycloaddition between indoles and acrylates to provide pyrroloindoline products possessing three stereocenters. Utilizing this methodology, we accomplished the asymmetric total synthesis of three natural products: (–)-lansai B and (+)-nocardioazines A and B. The total synthesis of (–)-lansai B was realized in six steps and featured an amino acid dimerization strategy. The total synthesis of (+)-nocardioazine B was completed in ten steps. Challenges were met in approaching (+)-nocardioazine A, where a seemingly easy last-step epoxidation proved unsuccessful. After re-examining our synthetic strategy, an early-stage epoxidation strategy was pursued, which eventually yielded a nine-step total synthesis of (+)-nocardioazine A.

Class two comprises the epidithiodiketopiperazine (ETP) natural products, which possess an additional disulfide bridge across the DKP core. With the goal of accessing ETPs with different peripheral structures for structure-activity relationship studies, a highly divergent route was developed and showcased in the formal synthesis of (–)-emethallicin E and (–)-haematocin, and in the first asymmetric synthesis of (–)-acetylapoaranotin.

Relevance: 10.00%

Abstract:

Biomolecular circuit engineering is critical for implementing complex functions in vivo, and is a baseline method in the synthetic biology space. However, current methods for conducting biomolecular circuit engineering are time-consuming and tedious. A complete design-build-test cycle typically takes weeks to months due to the lack of an intermediary between design ex vivo and testing in vivo. In this work, we explore the development and application of a "biomolecular breadboard" composed of an in vitro transcription-translation (TX-TL) lysate to speed up the engineering design-build-test cycle. We first developed protocols for creating and using lysates for conducting biological circuit design. By doing so we simplified the existing technology to an affordable ($0.03/uL) and easy-to-use three-tube reagent system. We then developed tools to accelerate circuit design by allowing for linear DNA use in lieu of plasmid DNA, and by utilizing principles of modular assembly. This allowed the design-build-test cycle to be reduced to under a business day. We then characterized protein degradation dynamics in the breadboard to aid in implementing complex circuits. Finally, we demonstrated that the breadboard could be applied to engineer complex synthetic circuits in vitro and in vivo. Specifically, we utilized our understanding of linear DNA prototyping, modular assembly, and protein degradation dynamics to characterize the repressilator oscillator and to prototype novel three- and five-node negative feedback oscillators both in vitro and in vivo. We therefore believe the biomolecular breadboard has wide applicability as an intermediary for biological circuit engineering.

Relevance: 10.00%

Abstract:

Optical Coherence Tomography (OCT) is a popular, rapidly growing imaging technique with an increasing number of biomedical applications due to its noninvasive nature. However, there are three major challenges in understanding and improving an OCT system: (1) Obtaining an OCT image is not easy. It either takes a real medical experiment or requires days of computer simulation. Without much data, it is difficult to study the physical processes underlying OCT imaging of different objects, simply because there aren't many imaged objects. (2) Interpretation of an OCT image is also hard. This challenge is more profound than it appears. For instance, it would require a trained expert to tell from an OCT image of human skin whether there is a lesion or not. This is expensive in its own right, but even the expert cannot be sure about the exact size of the lesion or the width of the various skin layers. The take-away message is that analyzing an OCT image even at a high level usually requires a trained expert, and pixel-level interpretation is simply unrealistic. The reason is simple: we have OCT images but not their underlying ground-truth structure, so there is nothing to learn from. (3) The imaging depth of OCT is very limited (millimeter or sub-millimeter in human tissue). While OCT utilizes infrared light for illumination to stay noninvasive, the downside is that photons at such long wavelengths can only penetrate a limited depth into the tissue before getting back-scattered. To image a particular region of a tissue, photons first need to reach that region. As a result, OCT signals from deeper regions of the tissue are both weak (since few photons reach there) and distorted (due to multiple scattering of the contributing photons). This fact alone makes OCT images very hard to interpret.

This thesis addresses the above challenges by developing an advanced Monte Carlo simulation platform which is 10,000 times faster than the state-of-the-art simulator in the literature, bringing the simulation time down from 360 hours to a single minute. This powerful simulation tool not only enables us to efficiently generate as many OCT images of objects with arbitrary structure and shape as we want on a common desktop computer, but it also provides us with the underlying ground truth of the simulated images at the same time, because we dictate it at the beginning of the simulation. This is one of the key contributions of this thesis. What allows us to build such a powerful simulation tool includes a thorough understanding of the signal formation process, clever implementation of the importance sampling/photon splitting procedure, efficient use of a voxel-based mesh system in determining photon-mesh interception, and a parallel computation of the different A-scans that constitute a full OCT image, among other programming and mathematical tricks, which will be explained in detail later in the thesis.
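
A minimal sketch of the kind of Monte Carlo kernel involved (generic photon-transport sampling in a homogeneous medium, assuming standard Beer-Lambert path lengths and Henyey-Greenstein scattering; the actual platform adds importance sampling, photon splitting, the voxel mesh, and parallel A-scan computation, and every parameter value below is hypothetical):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical homogeneous-tissue optical properties (per mm).
mu_s, mu_a, g = 10.0, 0.1, 0.9          # scattering, absorption, anisotropy
mu_t = mu_s + mu_a

def hg_cos_theta():
    # Sample the scattering-angle cosine from the Henyey-Greenstein phase function.
    s = (1 - g * g) / (1 - g + 2 * g * rng.random())
    return (1 + g * g - s * s) / (2 * g)

def scatter(u):
    # Rotate the unit direction vector u by (theta, phi): the standard MCML update.
    cos_t = hg_cos_theta()
    sin_t = np.sqrt(max(0.0, 1 - cos_t ** 2))
    phi = 2 * np.pi * rng.random()
    ux, uy, uz = u
    if abs(uz) > 0.99999:
        return np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), np.sign(uz) * cos_t])
    t = np.sqrt(1 - uz ** 2)
    return np.array([
        sin_t * (ux * uz * np.cos(phi) - uy * np.sin(phi)) / t + ux * cos_t,
        sin_t * (uy * uz * np.cos(phi) + ux * np.sin(phi)) / t + uy * cos_t,
        -sin_t * np.cos(phi) * t + uz * cos_t,
    ])

def one_photon(max_events=10_000):
    # Launch one photon straight down; return (position, weight) if it re-emerges
    # at the surface (z < 0), else None.  The weight decays by the albedo at every
    # interaction instead of terminating photons on absorption.
    pos, u, w = np.zeros(3), np.array([0.0, 0.0, 1.0]), 1.0
    for _ in range(max_events):
        pos = pos + u * (-np.log(rng.random()) / mu_t)   # Beer-Lambert free path
        if pos[2] < 0:
            return pos, w                                # back at the detector side
        w *= mu_s / mu_t
        if w < 1e-4:
            return None                                  # photon effectively absorbed
        u = scatter(u)
    return None
```

Importance sampling and photon splitting, as described above, bias such random walks toward the backscattered paths that actually contribute to the detected OCT signal, which is where the large speedup comes from.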

Next we aim at the inverse problem: given an OCT image, predict/reconstruct its ground-truth structure on a pixel level. By solving this problem we would be able to interpret an OCT image completely and precisely without the help of a trained expert. It turns out that we can do much better. For simple structures we are able to reconstruct the ground truth of an OCT image more than 98% correctly, and for more complicated structures (e.g., a multi-layered brain structure) we are looking at 93%. We achieved this through extensive use of Machine Learning. The success of the Monte Carlo simulation already puts us in a great position by providing us with a great deal of data (effectively unlimited), in the form of (image, truth) pairs. Through a transformation of the high-dimensional response variable, we convert the learning task into a multi-output multi-class classification problem and a multi-output regression problem. We then build a hierarchical architecture of machine learning models (a committee of experts) and train different parts of the architecture with specifically designed data sets. In prediction, an unseen OCT image first goes through a classification model to determine its structure (e.g., the number and the types of layers present in the image); then the image is handed to a regression model that is trained specifically for that particular structure to predict the length of the different layers and, by doing so, reconstruct the ground truth of the image. We also demonstrate that ideas from Deep Learning can be useful to further improve the performance.
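
A compact sketch of the committee-of-experts architecture described above (a structure classifier routing each image to a per-structure regressor; the feature representation and model choices are hypothetical stand-ins, not the trained models from the thesis):

```python
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

def train_hierarchy(X, structure_labels, data_by_structure):
    # Stage 1: a classifier that predicts which structure an image contains
    # (e.g., the number and types of layers).
    classifier = RandomForestClassifier(n_estimators=200).fit(X, structure_labels)
    # Stage 2: one regressor per structure class, trained only on simulated
    # images of that class, predicting the layer geometry (the ground truth).
    regressors = {
        c: RandomForestRegressor(n_estimators=200).fit(Xc, Tc)
        for c, (Xc, Tc) in data_by_structure.items()
    }
    return classifier, regressors

def reconstruct(classifier, regressors, image_features):
    # An unseen image is first routed to its structure class, then handed to
    # the expert regressor trained for that class.
    c = classifier.predict(image_features.reshape(1, -1))[0]
    return c, regressors[c].predict(image_features.reshape(1, -1))[0]
```

The (image, truth) pairs produced by the simulator play the role of X, structure_labels, and the per-class targets here; in the thesis the individual models are trained on specifically designed data sets rather than a single pooled set.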

It is worth pointing out that solving the inverse problem automatically improves the imaging depth, since the lower half of an OCT image (i.e., greater depth), which previously could hardly be seen, now becomes fully resolved. Interestingly, although the OCT signals constituting the lower half of the image are weak, messy, and uninterpretable to human eyes, they still carry enough information that a well-trained machine learning model can recover precisely the true structure of the object being imaged. This is just another case where Artificial Intelligence (AI) outperforms humans. To the best of the author's knowledge, this thesis is not only a success but also the first attempt to reconstruct an OCT image at a pixel level. Even attempting this kind of task would require fully annotated OCT images, and a lot of them (hundreds or even thousands). This is clearly impossible without a powerful simulation tool like the one developed in this thesis.