6 results for EFFICIENT SIMULATION

in CaltechTHESIS


Relevance:

70.00%

Publisher:

Abstract:

In a probabilistic assessment of the performance of structures subjected to uncertain environmental loads such as earthquakes, an important problem is to determine the probability that the structural response exceeds some specified limits within a given duration of interest. This problem is known as the first excursion problem, and it has been a challenging problem in the theory of stochastic dynamics and reliability analysis. In spite of the enormous amount of attention the problem has received, there is no procedure available for its general solution, especially for engineering problems of interest where the complexity of the system is large and the failure probability is small.

The application of simulation methods to solving the first excursion problem is investigated in this dissertation, with the objective of assessing the probabilistic performance of structures subjected to uncertain earthquake excitations modeled by stochastic processes. From a simulation perspective, the major difficulty in the first excursion problem comes from the large number of uncertain parameters often encountered in the stochastic description of the excitation. Existing simulation tools are examined, with special regard to their applicability in problems with a large number of uncertain parameters. Two efficient simulation methods are developed to solve the first excursion problem. The first method is developed specifically for linear dynamical systems, and it is found to be extremely efficient compared to existing techniques. The second method is more robust with respect to the type of problem and is applicable to general dynamical systems. It is efficient for estimating small failure probabilities because its computational effort grows at a much slower rate with decreasing failure probability than that of standard Monte Carlo simulation. The simulation methods are applied to assess the probabilistic performance of structures subjected to uncertain earthquake excitation. Failure analysis is also carried out using the samples generated during simulation, providing insight into the probable scenarios that occur given that a structure fails.
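
To make the baseline concrete, the sketch below estimates a first excursion probability by standard Monte Carlo for a hypothetical linear single-degree-of-freedom oscillator driven by discretized white-noise excitation; the oscillator parameters, threshold, and duration are illustrative stand-ins rather than values from the dissertation.

```python
import numpy as np

def first_excursion_prob_mc(n_samples=5_000, n_steps=1_500, dt=0.01,
                            omega=2 * np.pi, zeta=0.05, sigma=1.0,
                            threshold=0.5, seed=0):
    """Standard Monte Carlo estimate of P(max |x(t)| > threshold) for a
    linear SDOF oscillator x'' + 2*zeta*omega*x' + omega^2 * x = w(t),
    with w(t) a discretized Gaussian white-noise excitation (illustrative)."""
    rng = np.random.default_rng(seed)
    failures = 0
    for _ in range(n_samples):
        x = v = 0.0
        w = rng.normal(0.0, sigma / np.sqrt(dt), size=n_steps)  # one excitation sample
        for k in range(n_steps):
            a = w[k] - 2 * zeta * omega * v - omega**2 * x
            v += a * dt
            x += v * dt
            if abs(x) > threshold:        # first excursion: count and stop early
                failures += 1
                break
    return failures / n_samples

print(first_excursion_prob_mc())
```

Because this estimator needs on the order of 1/p_F samples (or more, for a usable coefficient of variation) to resolve a failure probability p_F, its cost explodes for rare failures; that scaling is precisely what the two methods developed in the dissertation are designed to improve.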

Relevance:

60.00%

Publisher:

Abstract:

Quantum computing offers powerful new techniques for speeding up the calculation of many classically intractable problems. Quantum algorithms can allow for the efficient simulation of physical systems, with applications to basic research, chemical modeling, and drug discovery; other algorithms have important implications for cryptography and internet security.

At the same time, building a quantum computer is a daunting task, requiring the coherent manipulation of systems with many quantum degrees of freedom while preventing environmental noise from interacting too strongly with the system. Fortunately, we know that, under reasonable assumptions, we can use the techniques of quantum error correction and fault tolerance to achieve an arbitrary reduction in the noise level.

In this thesis, we look at how additional information about the structure of noise, or "noise bias," can improve or alter the performance of techniques in quantum error correction and fault tolerance. In Chapter 2, we explore the possibility of designing certain quantum gates to be extremely robust with respect to errors in their operation. This naturally leads to structured noise where certain gates can be implemented in a protected manner, allowing the user to focus their protection on the noisier unprotected operations.

In Chapter 3, we examine how to tailor error-correcting codes and fault-tolerant quantum circuits in the presence of dephasing biased noise, where dephasing errors are far more common than bit-flip errors. By using an appropriately asymmetric code, we demonstrate the ability to improve the amount of error reduction and decrease the physical resources required for error correction.
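
As a toy illustration of why tailoring a code to the bias pays off (not one of the codes constructed in the thesis), the sketch below evaluates an n-qubit phase-flip repetition code under dephasing-biased noise: majority-vote decoding suppresses the dominant Z errors, while the unprotected X errors set a floor that determines how much repetition is worth using.

```python
from math import comb

def logical_error_rates(n, p_z, p_x):
    """n-qubit phase-flip repetition code with majority-vote decoding (n odd).
    Z errors on a majority of qubits cause a logical failure; any odd number
    of X errors acts as a logical error, since the code gives no X protection."""
    p_logical_z = sum(comb(n, k) * p_z**k * (1 - p_z)**(n - k)
                      for k in range(n // 2 + 1, n + 1))
    p_logical_x = sum(comb(n, k) * p_x**k * (1 - p_x)**(n - k)
                      for k in range(1, n + 1, 2))
    return p_logical_z, p_logical_x

# Dephasing-biased noise: Z errors 100x more likely than X errors.
p_z, p_x = 1e-2, 1e-4
for n in (1, 3, 5, 7):
    pz_L, px_L = logical_error_rates(n, p_z, p_x)
    print(f"n={n}: logical Z ~ {pz_L:.2e}, logical X ~ {px_L:.2e}")
```

The crossover between the two logical rates is the trade-off an asymmetric code exploits: enough protection to crush the dominant dephasing errors, but no more than the rare bit-flip errors justify.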

In Chapter 4, we analyze a variety of protocols for distilling magic states, which enable universal quantum computation, in the presence of faulty Clifford operations. Here again there is a hierarchy of noise levels, with a fixed error rate for faulty gates and a second rate for errors in the distilled states, which decreases as the states are distilled to better quality. The interplay of these different rates sets limits on the achievable distillation and how quickly states converge to that limit.
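
A rough caricature of that interplay (not a protocol analyzed in the thesis): iterate the ideal-Clifford error suppression of the well-known 15-to-1 distillation protocol, p -> 35 p^3, with an additive floor standing in for the fixed error rate of the faulty Clifford operations.

```python
def distill(p_state, p_clifford, rounds=6, c=1.0):
    """Toy recursion for repeated 15-to-1 magic state distillation: ideal
    Cliffords give p -> 35 * p**3; faulty Cliffords add an error floor of
    order c * p_clifford per round (c is a schematic constant)."""
    history = [p_state]
    for _ in range(rounds):
        p_state = 35 * p_state**3 + c * p_clifford
        history.append(p_state)
    return history

# The state error plunges toward, then stalls at, the Clifford-set floor.
for p in distill(p_state=1e-2, p_clifford=1e-6):
    print(f"{p:.3e}")
```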

Relevance:

30.00%

Publisher:

Abstract:

In this thesis we investigate atomic scale imperfections and fluctuations in the quantum transport properties of novel semiconductor nanostructures. For this purpose, we have developed a numerically efficient supercell model of quantum transport capable of representing potential variations in three dimensions. This flexibility allows us to examine new quantum device structures made possible through state-of-the-art semiconductor fabrication techniques such as molecular beam epitaxy and nanolithography. These structures, with characteristic dimensions on the order of a few nanometers, hold promise for much smaller, faster and more efficient devices than those in present operation, yet they are highly sensitive to structural and compositional variations such as defect impurities, interface roughness and alloy disorder. If these quantum structures are to serve as components of reliable, mass-produced devices, these issues must be addressed.

In Chapter 1 we discuss some of the important issues in resonant tunneling devices and mention some of their applications. In Chapters 2 and 3, we describe our supercell model of quantum transport and an efficient numerical implementation. In the remaining chapters, we present applications.
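
As a much-simplified one-dimensional analogue of such transport calculations (the thesis develops a full three-dimensional supercell model), the sketch below computes the transmission coefficient through a piecewise-constant potential with the standard transfer-matrix method; the GaAs/AlGaAs-like parameters are illustrative.

```python
import numpy as np

HBAR = 1.054571817e-34                  # J*s
M_EFF = 0.067 * 9.1093837015e-31        # GaAs-like effective mass, kg
EV = 1.602176634e-19                    # J per eV

def transmission(E_eV, layers):
    """Transfer-matrix transmission through a 1D piecewise-constant potential.
    `layers` is a list of (V_eV, width_m) regions between two V = 0 leads."""
    E = E_eV * EV
    k = lambda V_eV: np.sqrt(2 * M_EFF * (E - V_eV * EV) + 0j) / HBAR
    D = lambda kj: np.array([[1, 1], [1j * kj, -1j * kj]])
    M = np.linalg.inv(D(k(0.0)))
    for V_eV, d in layers:
        kj = k(V_eV)
        P = np.diag([np.exp(-1j * kj * d), np.exp(1j * kj * d)])  # propagate across the layer
        M = M @ D(kj) @ P @ np.linalg.inv(D(kj))
    M = M @ D(k(0.0))
    return float(abs(1 / M[0, 0]) ** 2)

# Double-barrier resonant tunneling structure: two 0.3 eV, 2 nm barriers
# around a 5 nm well.
layers = [(0.3, 2e-9), (0.0, 5e-9), (0.3, 2e-9)]
for E in (0.05, 0.10, 0.15, 0.20, 0.25):
    print(f"E = {E:.2f} eV  ->  T = {transmission(E, layers):.3e}")
```

Sweeping the energy finely around the quasi-bound state of the well traces out the transmission resonance; inserting additional potential features into `layers` is the one-dimensional analogue of the imperfections studied in the following chapters.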

In Chapter 4, we examine transport in single and double barrier tunneling structures with neutral impurities. We find that an isolated attractive impurity in a single barrier can produce a transmission resonance whose position and strength are sensitive to the location of the impurity within the barrier. Multiple impurities can lead to a complex resonance structure that fluctuates widely with impurity configuration. In addition, impurity resonances can give rise to negative differential resistance. In Chapter 5, we study interface roughness and alloy disorder in double barrier structures. We find that interface roughness and alloy disorder can shift and broaden the n = 1 transmission resonance and give rise to new resonance peaks, especially in the presence of clusters comparable in size to the electron de Broglie wavelength. In Chapter 6, we examine the effects of interface roughness and impurities on transmission in a quantum dot electron waveguide. We find that variation in the configuration and stoichiometry of the interface roughness leads to substantial fluctuations in the transmission properties. These fluctuations are reduced by an attractive impurity placed near the center of the dot.

Relevance:

30.00%

Publisher:

Abstract:

We develop new algorithms which combine the rigorous theory of mathematical elasticity with the geometric underpinnings and computational attractiveness of modern tools in geometry processing. We develop a simple elastic energy based on the Biot strain measure, which improves on state-of-the-art methods in geometry processing. We use this energy within a constrained optimization problem to, for the first time, provide surface parameterization tools that guarantee injectivity and bounded distortion, are user-directable, and scale to large meshes. With the help of some new generalizations in the computation of matrix functions and their derivatives, we extend our methods to a large class of hyperelastic stored energy functions quadratic in piecewise analytic strain measures, including the Hencky (logarithmic) strain, opening up a wide range of possibilities for robust and efficient nonlinear elastic simulation and geometry processing by elastic analogy.
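
A minimal sketch of the quadratic strain energies involved, for a single 2x2 deformation gradient: the Biot strain is taken from the polar decomposition F = R S, and the same quadratic form applied to log(S) gives a Hencky-type energy. The Lamé-style weights below are a common convention, not necessarily the exact energy used in the thesis.

```python
import numpy as np
from scipy.linalg import polar, logm

def biot_energy_density(F, mu=1.0, lam=1.0):
    """Quadratic energy in the Biot strain E_B = S - I, with F = R S the polar
    decomposition (R a rotation, S the symmetric stretch)."""
    R, S = polar(F)                      # F = R @ S
    E_B = S - np.eye(F.shape[0])
    return mu * np.sum(E_B * E_B) + 0.5 * lam * np.trace(E_B) ** 2

def hencky_energy_density(F, mu=1.0, lam=1.0):
    """Same quadratic form applied to the Hencky (logarithmic) strain log(S),
    one of the other strain measures the extended framework accommodates."""
    _, S = polar(F)
    H = logm(S).real
    return mu * np.sum(H * H) + 0.5 * lam * np.trace(H) ** 2

F = np.array([[1.2, 0.1],
              [0.0, 0.9]])
print(biot_energy_density(F), hencky_energy_density(F))
```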

Relevance:

30.00%

Publisher:

Abstract:

The high computational cost of correlated wavefunction theory (WFT) calculations has motivated the development of numerous methods to partition the description of large chemical systems into smaller subsystem calculations. For example, WFT-in-DFT embedding methods facilitate the partitioning of a system into two subsystems: a subsystem A that is treated using an accurate WFT method, and a subsystem B that is treated using a more efficient Kohn-Sham density functional theory (KS-DFT) method. Representation of the interactions between subsystems is non-trivial, and often requires the use of approximate kinetic energy functionals or computationally challenging optimized effective potential calculations; however, it has recently been shown that these challenges can be eliminated through the use of a projection operator. This dissertation describes the development and application of embedding methods that enable accurate and efficient calculation of the properties of large chemical systems.
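
A schematic of the level-shift form of such a projection operator, assuming the common formulation in which the occupied space of subsystem B is penalized by a large shift mu; the matrices below are tiny stand-ins, not values from any real calculation.

```python
import numpy as np

def embedded_operator(h, S, D_B, mu=1.0e6):
    """Level-shift projection: the occupied space of subsystem B (density
    matrix D_B in the AO basis, overlap S) is shifted up in energy by a large
    mu, which keeps the subsystem-A orbitals orthogonal to it without
    approximate kinetic-energy functionals or optimized effective potentials."""
    P_B = S @ D_B @ S                    # AO-basis projector onto B's occupied orbitals
    return h + mu * P_B

# Tiny stand-in matrices; a real calculation would take h, S, and D_B from an
# SCF calculation on the full system followed by orbital localization and
# partitioning into subsystems A and B.
S = np.eye(3)
h = -np.eye(3)
C_B = np.array([[0.0], [0.0], [1.0]])    # one occupied orbital assigned to B
D_B = C_B @ C_B.T
print(embedded_operator(h, S, D_B))
```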

Chapter 1 introduces a method for efficiently performing projection-based WFT-in-DFT embedding calculations on large systems. This is accomplished by using a truncated basis set representation of the subsystem A wavefunction. We show that naive truncation of the basis set associated with subsystem A can lead to large numerical artifacts, and present an approach for systematically controlling these artifacts.

Chapter 2 describes the application of the projection-based embedding method to investigate the oxidative stability of lithium-ion batteries. We study the oxidation potentials of mixtures of ethylene carbonate (EC) and dimethyl carbonate (DMC) by using the projection-based embedding method to calculate the vertical ionization energy (IE) of individual molecules at the CCSD(T) level of theory, while explicitly accounting for the solvent using DFT. Interestingly, we reveal that large contributions to the solvation properties of DMC originate from quadrupolar interactions, resulting in a much larger solvent reorganization energy than that predicted using simple dielectric continuum models. Demonstration that the solvation properties of EC and DMC are governed by fundamentally different intermolecular interactions provides insight into key aspects of lithium-ion batteries, with relevance to electrolyte decomposition processes, solid-electrolyte interphase formation, and the local solvation environment of lithium cations.

Relevance:

30.00%

Publisher:

Abstract:

Optical Coherence Tomography (OCT) is a popular, rapidly growing imaging technique with an increasing number of biomedical applications due to its noninvasive nature. However, there are three major challenges in understanding and improving an OCT system: (1) Obtaining an OCT image is not easy. It either takes a real medical experiment or requires days of computer simulation. Without much data, it is difficult to study the physical processes underlying OCT imaging of different objects simply because there aren't many imaged objects. (2) Interpretation of an OCT image is also hard. This challenge is more profound than it appears. For instance, it would require a trained expert to tell from an OCT image of human skin whether there is a lesion or not. This is expensive in its own right, but even the expert cannot be sure about the exact size of the lesion or the width of the various skin layers. The take-away message is that analyzing an OCT image even from a high level would usually require a trained expert, and pixel-level interpretation is simply unrealistic. The reason is simple: we have OCT images but not their underlying ground-truth structure, so there is nothing to learn from. (3) The imaging depth of OCT is very limited (millimeter or sub-millimeter on human tissues). While OCT utilizes infrared light for illumination to stay noninvasive, the downside is that photons at such long wavelengths can only penetrate a limited depth into the tissue before getting back-scattered. To image a particular region of a tissue, photons first need to reach that region. As a result, OCT signals from deeper regions of the tissue are both weak (since few photons reach that region) and distorted (due to multiple scattering of the contributing photons). This fact alone makes OCT images very hard to interpret.

This thesis addresses the above challenges by developing an advanced Monte Carlo simulation platform which is 10,000 times faster than the state-of-the-art simulator in the literature, bringing the simulation time down from 360 hours to a single minute. This powerful simulation tool not only enables us to efficiently generate as many OCT images of objects with arbitrary structure and shape as we want on a common desktop computer, but it also provides us with the underlying ground-truth of the simulated images at the same time, because we dictate it at the beginning of the simulation. This is one of the key contributions of this thesis. What allows us to build such a powerful simulation tool includes a thorough understanding of the signal formation process, a clever implementation of the importance sampling/photon splitting procedure, efficient use of a voxel-based mesh system in determining photon-mesh interception, and parallel computation of the different A-scans that constitute a full OCT image, among other programming and mathematical tricks, which are explained in detail later in the thesis.
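
A minimal sketch of the kind of photon random walk at the core of such a simulator, for a single homogeneous slab with isotropic scattering; it omits the importance sampling, photon splitting, voxel meshing, coherence/path-length tracking, and parallel A-scan machinery described above, and the optical coefficients are illustrative.

```python
import numpy as np

def backscatter_fraction(mu_s=10.0, mu_a=0.1, depth=1.0,
                         n_photons=20_000, seed=0):
    """Photon random walk in a homogeneous slab (lengths in mm, coefficients
    in 1/mm). Photons take exponentially distributed free paths, scatter
    isotropically, lose weight to absorption, and are tallied when they exit
    back through the top surface."""
    rng = np.random.default_rng(seed)
    mu_t = mu_s + mu_a
    collected = 0.0
    for _ in range(n_photons):
        z, uz, w = 0.0, 1.0, 1.0         # depth, direction cosine, photon weight
        while True:
            z += uz * rng.exponential(1.0 / mu_t)
            if z < 0.0:                  # escaped back through the top surface
                collected += w
                break
            if z > depth or w < 1e-4:    # transmitted, or weight negligible
                break
            w *= mu_s / mu_t             # deposit the absorbed fraction of weight
            uz = rng.uniform(-1.0, 1.0)  # isotropic scattering: new cos(theta)
    return collected / n_photons

print(backscatter_fraction())
```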

Next we aim at the inverse problem: given an OCT image, predict/reconstruct its ground-truth structure on a pixel level. By solving this problem we would be able to interpret an OCT image completely and precisely without help from a trained expert. It turns out that we can do much better. For simple structures we are able to reconstruct the ground-truth of an OCT image more than 98% correctly, and for more complicated structures (e.g., a multi-layered brain structure) we are looking at 93%. We achieve this through extensive use of Machine Learning. The success of the Monte Carlo simulation already puts us in a great position by providing us with a great deal of data (effectively unlimited), in the form of (image, truth) pairs. Through a transformation of the high-dimensional response variable, we convert the learning task into a multi-output multi-class classification problem and a multi-output regression problem. We then build a hierarchical architecture of machine learning models (a committee of experts) and train different parts of the architecture with specifically designed data sets. In prediction, an unseen OCT image first goes through a classification model to determine its structure (e.g., the number and types of layers present in the image); the image is then handed to a regression model, trained specifically for that particular structure, which predicts the length of the different layers and thereby reconstructs the ground-truth of the image. We also demonstrate that ideas from Deep Learning can be useful to further improve the performance.
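
A minimal sketch of that two-stage classify-then-regress pipeline on synthetic stand-in data, with a classifier routing each image to a structure-specific regressor; the models, feature sizes, and class definitions are illustrative, not the committee-of-experts architecture of the thesis.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic stand-ins: flattened "images" as features, a structure label per
# image, and per-layer lengths as regression targets.
n, d = 2000, 64
X = rng.standard_normal((n, d))
structure = rng.integers(0, 3, size=n)              # e.g. which layer configuration
y = rng.random((n, 3)) * (structure[:, None] + 1)   # dummy layer lengths

# Stage 1: classify the structure type of each image.
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, structure)

# Stage 2: one regressor per structure type, trained only on matching images.
regressors = {
    s: RandomForestRegressor(n_estimators=50, random_state=0).fit(
        X[structure == s], y[structure == s])
    for s in np.unique(structure)
}

# Prediction: route each unseen image through the classifier, then through the
# regressor specialized for the predicted structure.
X_new = rng.standard_normal((5, d))
for x in X_new:
    s_hat = clf.predict(x[None, :])[0]
    layers = regressors[s_hat].predict(x[None, :])[0]
    print(s_hat, np.round(layers, 2))
```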

It is worth pointing out that solving the inverse problem automatically improves the imaging depth, since the lower half of an OCT image (i.e., greater depth) could previously hardly be seen but now becomes fully resolved. Interestingly, although the OCT signals constituting the lower half of the image are weak, messy, and uninterpretable to human eyes, they still carry enough information that a well-trained machine learning model can recover precisely the true structure of the object being imaged. This is just another case where Artificial Intelligence (AI) outperforms humans. To the best of the author's knowledge, this thesis is not only a success but also the first attempt to reconstruct an OCT image at a pixel level. Even attempting such a task would require fully annotated OCT images, and a lot of them (hundreds or even thousands). This is clearly impossible without a powerful simulation tool like the one developed in this thesis.