10 results for full implementation
in CaltechTHESIS
Abstract:
This thesis explores the design, construction, and applications of the optoelectronic swept-frequency laser (SFL). The optoelectronic SFL is a feedback loop designed around a swept-frequency (chirped) semiconductor laser (SCL) to control its instantaneous optical frequency, such that the chirp characteristics are determined solely by a reference electronic oscillator. The resultant system generates precisely controlled optical frequency sweeps. In particular, we focus on linear chirps because of their numerous applications. We demonstrate optoelectronic SFLs based on vertical-cavity surface-emitting lasers (VCSELs) and distributed-feedback lasers (DFBs) at wavelengths of 1550 nm and 1060 nm. We develop an iterative bias current predistortion procedure that enables SFL operation at very high chirp rates, up to 10^16 Hz/sec. We describe commercialization efforts and implementation of the predistortion algorithm in a stand-alone embedded environment, undertaken as part of our collaboration with Telaris, Inc. We demonstrate frequency-modulated continuous-wave (FMCW) ranging and three-dimensional (3-D) imaging using a 1550 nm optoelectronic SFL.
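For context on the FMCW ranging mentioned above, the standard relation links the measured beat frequency to target range through the chirp rate. The sketch below is generic, not the thesis hardware: it uses the 10^16 Hz/sec chirp rate quoted in the abstract and a hypothetical 1 m target.

```python
# Generic FMCW relation: a linear chirp of rate xi reflected from a target at
# range R returns delayed by tau = 2R/c, producing a beat note f_b = xi * tau.
c = 3.0e8          # speed of light (m/s)
xi = 1e16          # chirp rate quoted in the abstract (Hz/s)
R = 1.0            # hypothetical target range (m), for illustration only

tau = 2 * R / c            # round-trip delay
f_beat = xi * tau          # beat frequency at the photodetector
print(f"beat frequency: {f_beat / 1e6:.1f} MHz")   # ~66.7 MHz for R = 1 m
```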
We develop the technique of multiple source FMCW (MS-FMCW) reflectometry, in which the frequency sweeps of multiple SFLs are "stitched" together in order to increase the optical bandwidth, and hence improve the axial resolution, of an FMCW ranging measurement. We demonstrate computer-aided stitching of DFB and VCSEL sweeps at 1550 nm. We also develop and demonstrate hardware stitching, which enables MS-FMCW ranging without additional signal processing. The culmination of this work is the hardware stitching of four VCSELs at 1550 nm for a total optical bandwidth of 2 THz, and a free-space axial resolution of 75 microns.
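The 75 micron figure follows from the standard FMCW axial-resolution relation, delta_z = c / (2B); a quick numerical check with the 2 THz stitched bandwidth quoted above:

```python
# Axial resolution of an FMCW measurement with total optical bandwidth B.
c = 3.0e8       # speed of light (m/s)
B = 2.0e12      # stitched optical bandwidth from the abstract (Hz)

delta_z = c / (2 * B)
print(f"axial resolution: {delta_z * 1e6:.0f} microns")   # 75 microns
```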
We describe our work on the tomographic imaging camera (TomICam), a 3-D imaging system based on FMCW ranging that features non-mechanical acquisition of transverse pixels. Our approach uses a combination of electronically tuned optical sources and low-cost full-field detector arrays, completely eliminating the need for moving parts traditionally employed in 3-D imaging. We describe the basic TomICam principle, and demonstrate single-pixel TomICam ranging in a proof-of-concept experiment. We also discuss the application of compressive sensing (CS) to the TomICam platform, and perform a series of numerical simulations. These simulations show that tenfold compression is feasible in CS TomICam, which effectively improves the volume acquisition speed by a factor of ten.
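The abstract does not specify the reconstruction algorithm used in the CS TomICam simulations. The sketch below is only a generic illustration of compressive sensing at tenfold compression: a random sensing matrix and orthogonal matching pursuit applied to a hypothetical sparse depth profile, with all dimensions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: n depth bins, m = n/10 measurements (tenfold compression).
n, m, k = 200, 20, 3                            # signal length, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)    # random sensing matrix

x = np.zeros(n)                                  # sparse depth profile: a few reflectors
x[rng.choice(n, k, replace=False)] = rng.uniform(0.5, 1.0, k)
y = A @ x                                        # compressed measurements

# Orthogonal matching pursuit (a generic CS solver, not the thesis algorithm).
support, r = [], y.copy()
for _ in range(k):
    j = int(np.argmax(np.abs(A.T @ r)))          # column most correlated with residual
    support.append(j)
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    r = y - A[:, support] @ coef                 # update residual

x_hat = np.zeros(n)
x_hat[support] = coef
print("relative reconstruction error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```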
We develop chirped-wave phase-locking techniques, and apply them to coherent beam combining (CBC) of chirped-seed amplifiers (CSAs) in a master oscillator power amplifier configuration. The precise chirp linearity of the optoelectronic SFL enables non-mechanical compensation of optical delays using acousto-optic frequency shifters, and its high chirp rate simultaneously increases the stimulated Brillouin scattering (SBS) threshold of the active fiber. We characterize a 1550 nm chirped-seed amplifier coherent-combining system. We use a chirp rate of 5*10^14 Hz/sec to increase the amplifier SBS threshold threefold compared to a single-frequency seed. We demonstrate efficient phase-locking and electronic beam steering of two 3 W erbium-doped fiber amplifier channels, achieving temporal phase noise levels corresponding to interferometric fringe visibilities exceeding 98%.
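For context on the 98% figure, a common rule of thumb relates residual Gaussian phase jitter sigma_phi to two-beam fringe visibility via V = exp(-sigma_phi^2 / 2). Treating the quoted visibility this way is an assumption about how the noise is characterized, not a statement of the thesis's analysis:

```python
import math

V = 0.98                                   # fringe visibility quoted in the abstract
sigma_phi = math.sqrt(-2 * math.log(V))    # implied RMS phase error (rad), assuming Gaussian noise
print(f"residual phase noise: {sigma_phi:.2f} rad (~lambda/{2 * math.pi / sigma_phi:.0f})")
```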
Abstract:
Humans are particularly adept at modifying their behavior in accordance with changing environmental demands. Through various mechanisms of cognitive control, individuals are able to tailor actions to fit complex short- and long-term goals. The research described in this thesis uses functional magnetic resonance imaging to characterize the neural correlates of cognitive control at two levels of complexity: response inhibition and self-control in intertemporal choice. First, we examined changes in neural response associated with increased experience and skill in response inhibition; successful response inhibition was associated with decreased neural response over time in the right ventrolateral prefrontal cortex, a region widely implicated in cognitive control, providing evidence for increased neural efficiency with learned automaticity. We also examined a more abstract form of cognitive control using intertemporal choice. In two experiments, we identified putative neural substrates for individual differences in temporal discounting, or the tendency to prefer immediate to delayed rewards. Using dynamic causal models, we characterized the neural circuit between ventromedial prefrontal cortex, an area involved in valuation, and dorsolateral prefrontal cortex, a region implicated in self-control in intertemporal and dietary choice, and found that connectivity from dorsolateral prefrontal cortex to ventromedial prefrontal cortex increases at the time of choice, particularly when delayed rewards are chosen. Moreover, estimates of the strength of connectivity predicted out-of-sample individual rates of temporal discounting, suggesting a neurocomputational mechanism for variation in the ability to delay gratification. Next, we interrogated the hypothesis that individual differences in temporal discounting are in part explained by the ability to imagine future reward outcomes. Using a novel paradigm, we imaged neural response during the imagining of primary rewards, and identified negative correlations between activity in regions associated with the processing of both real and imagined rewards (lateral orbitofrontal cortex and ventromedial prefrontal cortex, respectively) and the individual temporal discounting parameters estimated in the previous experiment. These data suggest that individuals who are better able to represent reward outcomes neurally are less susceptible to temporal discounting. Together, these findings provide further insight into the role of the prefrontal cortex in implementing cognitive control, and propose neurobiological substrates for individual variation.
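For readers unfamiliar with the discounting parameter referenced above, a widely used model in this literature is hyperbolic discounting; whether the thesis uses exactly this functional form is an assumption of this note, included only to make "discount rate" concrete.

```latex
% Standard hyperbolic discounting: the subjective value V of a reward of
% amount A delivered after delay D, given an individual discount rate k.
V(A, D) = \frac{A}{1 + kD}
```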
Abstract:
A neural network is a highly interconnected set of simple processors. The many connections allow information to travel rapidly through the network, and because the processors are simple, many of them are feasible in one network. Together these properties imply that we can build efficient massively parallel machines using neural networks. The primary problem is how to specify the interconnections in a neural network. The various approaches developed so far, such as outer-product rules, learning algorithms, or energy functions, suffer from the following deficiencies: long training/specification times; no guarantee of working on all inputs; and a requirement of full connectivity.
Alternatively, we discuss methods of using the topology and constraints of the problems themselves to design the topology and connections of the neural solution. We define several useful circuits, generalizations of the Winner-Take-All circuit, that allow us to incorporate constraints using feedback in a controlled manner. These circuits are proven to be stable and to converge only on valid states. We use the Hopfield electronic model since this is close to an actual implementation. We also discuss methods for incorporating these circuits into larger systems, neural and non-neural. By exploiting regularities in our definition, we can construct efficient networks. To demonstrate the methods, we look to three problems from communications. We first discuss two applications to problems from circuit switching: finding routes in large multistage switches, and the call rearrangement problem. These show both how we can use many neurons to build massively parallel machines and how the Winner-Take-All circuits can simplify our designs.
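The Winner-Take-All behavior referenced here can be illustrated with textbook mutual-inhibition dynamics. The sketch below is not the thesis's Hopfield-model circuit or its proven-stable generalizations, just a minimal numerical illustration of a WTA network settling on the unit with the largest input.

```python
import numpy as np

def winner_take_all(inputs, beta=2.0, steps=200, dt=0.05):
    """Toy mutual-inhibition WTA: each unit is driven by its own input and
    inhibited by the summed rectified activity of the others. This is the
    textbook dynamic, not the constrained circuits defined in the thesis."""
    I = np.asarray(inputs, dtype=float)
    x = I.copy()
    for _ in range(steps):
        act = np.maximum(x, 0.0)                 # rectified unit activities
        inhibition = beta * (act.sum() - act)    # lateral inhibition from the other units
        x = x + dt * (-x + I - inhibition)       # relax toward input minus inhibition
    return int(np.argmax(x))

print(winner_take_all([0.9, 1.0, 0.4]))   # expected winner: index 1, the largest input
```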
Next, we develop a solution to the contention arbitration problem of high-speed packet switches. We define a useful class of switching networks and then design a neural network to solve the contention arbitration problem for this class. Various aspects of the neural network/switch system are analyzed to measure the queueing performance of this method. Using the basic design, we present a feasible architecture for a large (1024-input) ATM packet switch. The massive parallelism of neural networks lets us consider algorithms that were previously computationally unattainable, and these newly viable algorithms lead to new perspectives on switch design.
Abstract:
Methods that exploit the intrinsic locality of molecular interactions show significant promise in making tractable the electronic structure calculation of large-scale systems. In particular, embedded density functional theory (e-DFT) offers a formally exact approach to electronic structure calculations in which the interactions between subsystems are evaluated in terms of their electronic density. In the following dissertation, methodological advances of embedded density functional theory are described, numerically tested, and applied to real chemical systems.
First, we describe an e-DFT protocol in which the non-additive kinetic energy component of the embedding potential is treated exactly. Then, we present a general implementation of the exact calculation of the non-additive kinetic potential (NAKP) and apply it to molecular systems. We demonstrate that the implementation using the exact NAKP is in excellent agreement with reference Kohn-Sham calculations, whereas the approximate functionals lead to qualitative failures in the calculated energies and equilibrium structures.
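For reference, the non-additive kinetic energy for two subsystems and the corresponding potential entering the embedding operator are defined in the standard way; the notation below (subsystem densities rho_A and rho_B, non-interacting kinetic energy functional T_s) is generic rather than copied from the thesis.

```latex
% Non-additive kinetic energy of subsystems A and B, and the non-additive
% kinetic potential (NAKP) felt by subsystem A:
T_s^{\mathrm{nad}}[\rho_A,\rho_B] = T_s[\rho_A+\rho_B] - T_s[\rho_A] - T_s[\rho_B],
\qquad
v_{\mathrm{NAKP}}(\mathbf{r}) = \frac{\delta T_s^{\mathrm{nad}}[\rho_A,\rho_B]}{\delta \rho_A(\mathbf{r})} .
```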
Next, we introduce density-embedding techniques to enable the accurate and stable calculation of correlated wavefunctions (CWs) in complex environments. Embedding potentials calculated using e-DFT introduce the effect of the environment on a subsystem for CW calculations (WFT-in-DFT). We demonstrate that WFT-in-DFT calculations are in good agreement with CW calculations performed on the full complex.
We significantly improve the numerics of the algorithm by enforcing orthogonality between subsystems through the introduction of a projection operator. Utilizing the projection-based embedding scheme, we rigorously analyze the sources of error in quantum embedding calculations in which an active subsystem is treated using CWs, and the remainder using density functional theory. We show that the embedding potential felt by the electrons in the active subsystem makes only a small contribution to the error of the method, whereas the error in the non-additive exchange-correlation energy dominates. We develop an algorithm which corrects this term and demonstrate the accuracy of this corrected embedding scheme.
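One common way to enforce the inter-subsystem orthogonality described here is a level-shift projector built from the occupied orbitals of the environment subsystem; whether this is exactly the operator used in the thesis is an assumption, but it illustrates the idea.

```latex
% Level-shift projector onto the occupied orbitals of subsystem B, added to
% the effective one-electron operator of subsystem A with a large parameter mu:
\hat{P}^{B} = \sum_{i \in \mathrm{occ}(B)} \lvert \phi_i^{B} \rangle \langle \phi_i^{B} \rvert,
\qquad
\hat{f}^{A}_{\mathrm{emb}} = \hat{f}^{A} + \mu\,\hat{P}^{B},
\quad \mu \to \infty .
```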
Abstract:
This thesis aims at a simple one-parameter macroscopic model of distributed damage and fracture of polymers that is amenable to a straightforward and efficient numerical implementation. The failure model is motivated by post-mortem fractographic observations of void nucleation, growth and coalescence in polyurea stretched to failure, and accounts for the specific fracture energy per unit area attendant to rupture of the material.
Furthermore, it is shown that the macroscopic model can be rigorously derived, in the sense of optimal scaling, from a micromechanical model of chain elasticity and failure regularized by means of fractional strain-gradient elasticity. Optimal scaling laws are derived that link the single parameter of the macroscopic model, namely the critical energy-release rate of the material, to micromechanical parameters pertaining to the elasticity and strength of the polymer chains and to the strain-gradient elasticity regularization. Based on these optimal scaling laws, it is shown how the critical energy-release rate of specific materials can be determined from test data. In addition, the scope and fidelity of the model are demonstrated by means of an example of application, namely Taylor-impact experiments on polyurea rods. To this end, optimal transportation meshfree approximation schemes using maximum-entropy interpolation functions are employed.
Finally, a different crazing model using full derivatives of the deformation gradient and a core cut-off is presented, along with a numerical non-local regularization model. The numerical model takes into account higher-order deformation gradients in a finite element framework. It is shown how the introduction of non-locality into the model stabilizes the effect of strain localization to small volumes in materials undergoing softening. From an investigation of craze formation in the limit of large deformations, convergence studies verifying the scaling properties of both local and non-local energy contributions are presented.
Abstract:
Quantum mechanics places limits on the minimum energy of a harmonic oscillator via the ever-present "zero-point" fluctuations of the quantum ground state. Through squeezing, however, it is possible to decrease the noise of a single motional quadrature below the zero-point level as long as noise is added to the orthogonal quadrature. While squeezing below the quantum noise level was achieved decades ago with light, quantum squeezing of the motion of a mechanical resonator is a more difficult prospect due to the large thermal occupations of megahertz-frequency mechanical devices even at typical dilution refrigerator temperatures of ~ 10 mK.
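The "large thermal occupation" can be made concrete with the Bose-Einstein occupation at a dilution-refrigerator temperature; the 5 MHz resonance frequency below is a hypothetical example, not the device described in the thesis.

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant (J*s)
kB = 1.380649e-23        # Boltzmann constant (J/K)

f_m = 5e6                # hypothetical mechanical frequency (Hz)
T = 0.010                # bath temperature (K), ~10 mK

x = hbar * 2 * math.pi * f_m / (kB * T)
n_th = 1.0 / math.expm1(x)        # Bose-Einstein thermal phonon occupation
print(f"thermal occupation: {n_th:.0f} phonons")   # roughly 40 phonons
```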
Kronwald, Marquardt, and Clerk (2013) propose a method of squeezing a single quadrature of mechanical motion below the level of its zero-point fluctuations, even when the mechanics starts out with a large thermal occupation. The scheme operates under the framework of cavity optomechanics, where an optical or microwave cavity is coupled to the mechanics in order to control and read out the mechanical state. In the proposal, two pump tones are applied to the cavity, each detuned from the cavity resonance by the mechanical frequency. The pump tones establish and couple the mechanics to a squeezed reservoir, producing arbitrarily large, steady-state squeezing of the mechanical motion. In this dissertation, I describe two experiments related to the implementation of this proposal in an electromechanical system. I also expand on the theory presented in Kronwald et al. to include the effects of squeezing in the presence of classical microwave noise, and without assumptions of perfect alignment of the pump frequencies.
In the first experiment, we produce a squeezed thermal state using the method of Kronwald et al. We perform back-action evading measurements of the mechanical squeezed state in order to probe the noise in both quadratures of the mechanics. Using this method, we detect single-quadrature fluctuations at the level of 1.09 +/- 0.06 times the quantum zero-point motion.
In the second experiment, we measure the spectral noise of the microwave cavity in the presence of the squeezing tones and fit a full model to the spectrum in order to deduce a quadrature variance of 0.80 +/- 0.03 times the zero-point level. These measurements provide the first evidence of quantum squeezing of motion in a mechanical resonator.
Abstract:
This thesis outlines the construction of several types of structured integrators for incompressible fluids. We first present a vorticity integrator, which is the Hamiltonian counterpart of the existing Lagrangian-based fluid integrator. We next present a model-reduced variational Eulerian integrator for incompressible fluids, which combines the efficiency gains of dimension reduction, the qualitative robustness to coarse spatial and temporal resolutions of geometric integrators, and the simplicity of homogenized boundary conditions on regular grids to deal with arbitrarily-shaped domains with sub-grid accuracy.
Both these numerical methods involve approximating the Lie group of volume-preserving diffeomorphisms by a finite-dimensional Lie group and then restricting the resulting variational principle by means of a non-holonomic constraint. Advantages and limitations of this discretization method will be outlined. It will be seen that these derivation techniques are unable to yield symplectic integrators, but that energy conservation is easily obtained, as is a discretized version of Kelvin's circulation theorem.
Finally, we outline the basis of a spectral discrete exterior calculus, which may be a useful element in producing structured numerical methods for fluids in the future.
Abstract:
The quasicontinuum (QC) method was introduced to coarse-grain crystalline atomic ensembles in order to bridge the scales from individual atoms to the micro- and mesoscales. Though many QC formulations have been proposed with varying characteristics and capabilities, a crucial cornerstone of all QC techniques is the concept of summation rules, which attempt to efficiently approximate the total Hamiltonian of a crystalline atomic ensemble by a weighted sum over a small subset of atoms. In this work we propose a novel, fully-nonlocal, energy-based formulation of the QC method with support for legacy and new summation rules through a general energy-sampling scheme. Our formulation does not conceptually differentiate between atomistic and coarse-grained regions and thus allows for seamless bridging without domain-coupling interfaces. Within this structure, we introduce a new class of summation rules which leverage the affine kinematics of this QC formulation to most accurately integrate thermodynamic quantities of interest. By comparing this new class of summation rules to commonly-employed rules through analysis of energy and spurious force errors, we find that the new rules produce no residual or spurious force artifacts in the large-element limit under arbitrary affine deformation, while allowing us to seamlessly bridge to full atomistics. We verify that the new summation rules exhibit significantly smaller force artifacts and energy approximation errors than all comparable previous summation rules through a comprehensive suite of examples with spatially non-uniform QC discretizations in two and three dimensions. Due to the unique structure of these summation rules, we also use the new formulation to study scenarios with large regions of free surface, a class of problems previously out of reach of the QC method. Lastly, we present the key components of a high-performance, distributed-memory realization of the new method, including a novel algorithm for supporting unparalleled levels of deformation. Overall, this new formulation and implementation allows us to efficiently perform simulations containing an unprecedented number of degrees of freedom with low approximation error.
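The summation-rule concept described above can be written compactly; the notation below is generic (the sampling set and weights are precisely the objects that the thesis's new rules choose).

```latex
% Summation rule: approximate the total energy of N atoms by a weighted sum
% over a small set S of sampling atoms with weights w_b, for displacements u.
E_{\mathrm{tot}}(\mathbf{u}) = \sum_{a=1}^{N} E_a(\mathbf{u})
\;\approx\; \sum_{b \in S} w_b\, E_b(\mathbf{u}),
\qquad \lvert S \rvert \ll N .
```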
Abstract:
Within a wind farm, multiple turbine wakes can interact and have a substantial effect on the overall power production. This makes an understanding of the wake recovery process critically important to optimizing wind farm efficiency. Vertical-axis wind turbines (VAWTs) exhibit features that are amenable to dramatically improving this efficiency. However, the physics of the flow around VAWTs is not well understood, especially as it pertains to wake interactions, and it is the goal of this thesis to partially fill this void. This objective is approached from two broadly different perspectives: a low-order view of wind farm aerodynamics, and a detailed experimental analysis of the VAWT wake.
One of the contributions of this thesis is the development of a semi-empirical model of wind farm aerodynamics, known as the LRB model, that is able to predict turbine array configurations to leading order accuracy. Another contribution is the characterization of the VAWT wake as a function of turbine solidity. It was found that three distinct regions of flow exist in the VAWT wake: (1) the near wake, where periodic blade shedding of vorticity dominates; (2) a transition region, where growth of a shear-layer instability occurs; (3) the far wake, where bluff-body oscillations dominate. The wake transition can be predicted using a new parameter, the dynamic solidity, which establishes a quantitative connection between the wake of a VAWT and that of a circular cylinder. The results provide insight into the mechanism of the VAWT wake recovery and the potential means to control it.
Abstract:
Optical Coherence Tomography (OCT) is a popular, rapidly growing imaging technique with an increasing number of biomedical applications due to its noninvasive nature. However, there are three major challenges in understanding and improving an OCT system: (1) Obtaining an OCT image is not easy. It either takes a real medical experiment or requires days of computer simulation. Without much data, it is difficult to study the physical processes underlying OCT imaging of different objects simply because there aren't many imaged objects. (2) Interpretation of an OCT image is also hard. This challenge is more profound than it appears. For instance, it would require a trained expert to tell from an OCT image of human skin whether there is a lesion or not. This is expensive in its own right, but even the expert cannot be sure about the exact size of the lesion or the width of the various skin layers. The take-away message is that analyzing an OCT image even at a high level usually requires a trained expert, and pixel-level interpretation is simply unrealistic. The reason is simple: we have OCT images but not their underlying ground-truth structure, so there is nothing to learn from. (3) The imaging depth of OCT is very limited (millimeter or sub-millimeter in human tissue). While OCT utilizes infrared light for illumination to stay noninvasive, the downside is that photons at such long wavelengths can only penetrate a limited depth into the tissue before getting back-scattered. To image a particular region of a tissue, photons first need to reach that region. As a result, OCT signals from deeper regions of the tissue are both weak (since few photons reach there) and distorted (due to multiple scattering of the contributing photons). This fact alone makes OCT images very hard to interpret.
This thesis addresses the above challenges by successfully developing an advanced Monte Carlo simulation platform which is 10,000 times faster than the state-of-the-art simulator in the literature, bringing down the simulation time from 360 hours to a single minute. This powerful simulation tool not only enables us to efficiently generate as many OCT images of objects with arbitrary structure and shape as we want on a common desktop computer, but it also provides us the underlying ground-truth of the simulated images at the same time, because we dictate it at the beginning of the simulation. This is one of the key contributions of this thesis. What allows us to build such a powerful simulation tool includes a thorough understanding of the signal formation process, clever implementation of the importance sampling/photon splitting procedure, efficient use of a voxel-based mesh system in determining photon-mesh intersections, and parallel computation of the different A-scans that constitute a full OCT image, among other programming and mathematical tricks, which will be explained in detail later in the thesis.
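The abstract credits part of the speed-up to importance sampling. The sketch below shows only the generic reweighting idea on a toy rare-event integral, analogous to favoring photons that scatter back toward the detector; it is not the photon-transport implementation in the thesis. Samples are drawn from a biased distribution and each contribution is multiplied by the likelihood ratio, so the estimator stays unbiased while concentrating samples where they matter.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    """Rare-event indicator: contributes only when x > 5 (true probability exp(-5))."""
    return np.where(x > 5.0, 1.0, 0.0)

n = 100_000

# Naive Monte Carlo: sample directly from p = Exponential(mean 1).
x_p = rng.exponential(1.0, n)
naive = f(x_p).mean()

# Importance sampling: sample from a heavier-tailed q = Exponential(mean 5)
# and reweight each sample by the likelihood ratio p(x)/q(x).
x_q = rng.exponential(5.0, n)
w = np.exp(-x_q) / (np.exp(-x_q / 5.0) / 5.0)
is_est = (f(x_q) * w).mean()

print(f"true       : {np.exp(-5.0):.5f}")
print(f"naive MC   : {naive:.5f}")
print(f"importance : {is_est:.5f}")
```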
Next we aim at the inverse problem: given an OCT image, predict/reconstruct its ground-truth structure on a pixel level. By solving this problem we would be able to interpret an OCT image completely and precisely without help from a trained expert. It turns out that we can do much better. For simple structures we are able to reconstruct the ground-truth of an OCT image more than 98% correctly, and for more complicated structures (e.g., a multi-layered brain structure) we are looking at 93%. We achieved this through extensive use of Machine Learning. The success of the Monte Carlo simulation already puts us in a great position by providing us with a great deal of data (effectively unlimited), in the form of (image, truth) pairs. Through a transformation of the high-dimensional response variable, we convert the learning task into a multi-output multi-class classification problem and a multi-output regression problem. We then build a hierarchical architecture of machine learning models (a committee of experts) and train different parts of the architecture with specifically designed data sets. In prediction, an unseen OCT image first goes through a classification model to determine its structure (e.g., the number and the types of layers present in the image); then the image is handed to a regression model that is trained specifically for that particular structure to predict the lengths of the different layers and thereby reconstruct the ground-truth of the image. We also demonstrate that ideas from Deep Learning can be useful to further improve the performance.
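The classify-then-regress hierarchy can be sketched with off-the-shelf models. The code below is a schematic on synthetic arrays: the features, labels, model choices, and data sizes are all placeholders, not the thesis's actual committee-of-experts architecture or training sets.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic stand-ins: each sample is a feature vector for one image, with a
# structure label (e.g., number/type of layers) and per-layer thickness targets.
n, d, n_classes, n_layers = 2000, 64, 3, 4
X = rng.standard_normal((n, d))
structure = rng.integers(0, n_classes, n)          # structure class per image
thickness = rng.uniform(10, 100, (n, n_layers))    # ground-truth layer thicknesses

# Stage 1: one classifier decides which structure an unseen image has.
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, structure)

# Stage 2: one multi-output regressor per structure class, trained on that class only.
regressors = {
    c: RandomForestRegressor(n_estimators=50, random_state=0).fit(
        X[structure == c], thickness[structure == c]
    )
    for c in range(n_classes)
}

def reconstruct(image_features):
    """Route an unseen image through the classifier, then the matching regressor."""
    c = int(clf.predict(image_features[None, :])[0])
    return c, regressors[c].predict(image_features[None, :])[0]

print(reconstruct(X[0]))
```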
It is worth pointing out that solving the inverse problem automatically improves the imaging depth, since the lower half of an OCT image (i.e., greater depth) could previously hardly be seen but now becomes fully resolved. Interestingly, although the OCT signals constituting the lower half of the image are weak, messy, and uninterpretable to human eyes, they still carry enough information that, when fed into a well-trained machine learning model, yields precisely the true structure of the object being imaged. This is just another case where Artificial Intelligence (AI) outperforms humans. To the best of the author's knowledge, this thesis is not only successful but also the first attempt to reconstruct an OCT image at the pixel level. Even attempting such a task requires fully annotated OCT images, and a lot of them (hundreds or even thousands). This is clearly impossible without a powerful simulation tool like the one developed in this thesis.