5 results for Monte-Carlo approach
in CaltechTHESIS
Abstract:
We investigate the 2d O(3) model with the standard action by Monte Carlo simulation at couplings β up to 2.05. We measure the energy density, mass gap and susceptibility of the model, and gather high statistics on lattices of size L ≤ 1024 using the Floating Point Systems T-series vector hypercube and the Thinking Machines Corp.'s Connection Machine 2. Asymptotic scaling does not appear to set in for this action, even at β = 2.10, where the correlation length is 420. We observe a 20% difference between our estimate m/Λ_(MS-bar) = 3.52(6) at this β and the recent exact analytical result. We use the overrelaxation algorithm interleaved with Metropolis updates and show that the decorrelation time scales with the correlation length and the number of overrelaxation steps per sweep. We determine its effective dynamical critical exponent to be z' = 1.079(10); thus critical slowing down is reduced significantly for this local algorithm, which is vectorizable and parallelizable.
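As a point of reference, the overrelaxation update for an O(3) spin reflects it about the sum of its neighbours, which leaves the standard action unchanged, so ergodicity comes from the interleaved Metropolis sweep. The sketch below illustrates the idea in Python on a small lattice; the lattice size, coupling and number of overrelaxation steps per sweep are placeholders rather than the thesis's parameters, and the explicit site loops stand in for the vectorized SIMD implementation used on the T-series and Connection Machine.

```python
# Minimal sketch (not the thesis code): Metropolis sweeps interleaved with
# overrelaxation sweeps for the 2D O(3) model with the standard action
#   S = -beta * sum_<ij> s_i . s_j ,  s_i a unit 3-vector.
import numpy as np

rng = np.random.default_rng(0)

def local_field(spins, x, y):
    """Sum of the four nearest-neighbour spins (periodic boundaries)."""
    L = spins.shape[0]
    return (spins[(x + 1) % L, y] + spins[(x - 1) % L, y] +
            spins[x, (y + 1) % L] + spins[x, (y - 1) % L])

def overrelax_sweep(spins):
    """Reflect each spin about its local field: energy-preserving move."""
    L = spins.shape[0]
    for x in range(L):
        for y in range(L):
            h = local_field(spins, x, y)
            h2 = h @ h
            if h2 > 0.0:
                s = spins[x, y]
                spins[x, y] = 2.0 * (s @ h) / h2 * h - s

def metropolis_sweep(spins, beta, delta=0.5):
    """Propose a small random rotation of each spin; accept with prob min(1, exp(-beta*dE))."""
    L = spins.shape[0]
    for x in range(L):
        for y in range(L):
            s = spins[x, y]
            trial = s + delta * rng.normal(size=3)
            trial /= np.linalg.norm(trial)
            h = local_field(spins, x, y)
            dE = -(trial - s) @ h              # change of E = -sum s_i.s_j
            if dE <= 0.0 or rng.random() < np.exp(-beta * dE):
                spins[x, y] = trial

L, beta = 16, 1.5                              # toy parameters, not thesis values
spins = rng.normal(size=(L, L, 3))
spins /= np.linalg.norm(spins, axis=-1, keepdims=True)
for sweep in range(100):
    metropolis_sweep(spins, beta)
    for _ in range(4):                         # several overrelaxation steps per sweep
        overrelax_sweep(spins)
```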
We also use cluster Monte Carlo algorithms, non-local update schemes that can greatly increase the efficiency of computer simulations of spin models. The major computational task in these algorithms is connected component labeling, which identifies clusters of connected sites on a lattice. We have devised new SIMD component labeling algorithms and implemented them on the Connection Machine, and we investigate their performance when applied to the cluster update of the two-dimensional Ising spin model.
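For orientation, a minimal serial version of the cluster update referred to above can be sketched for the two-dimensional Ising model: bonds between aligned neighbours are activated with probability 1 − exp(−2β), the activated bonds are grouped by connected component labeling, and each cluster is flipped as a whole. The union-find labeling below is only a stand-in for the SIMD labeling algorithms developed in the thesis, and the lattice size and coupling are illustrative.

```python
# Swendsen-Wang-style cluster update sketch for the 2D Ising model.
import numpy as np

rng = np.random.default_rng(1)

def find(parent, i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]          # path halving
        i = parent[i]
    return i

def union(parent, a, b):
    ra, rb = find(parent, a), find(parent, b)
    if ra != rb:
        parent[rb] = ra

def swendsen_wang_sweep(spins, beta):
    L = spins.shape[0]
    p = 1.0 - np.exp(-2.0 * beta)
    parent = np.arange(L * L)
    # Label clusters: union sites joined by activated bonds (periodic lattice).
    for x in range(L):
        for y in range(L):
            i = x * L + y
            for dx, dy in ((1, 0), (0, 1)):
                xn, yn = (x + dx) % L, (y + dy) % L
                if spins[x, y] == spins[xn, yn] and rng.random() < p:
                    union(parent, i, xn * L + yn)
    # Flip each labeled cluster as a whole with probability 1/2.
    flip = rng.random(L * L) < 0.5
    for x in range(L):
        for y in range(L):
            if flip[find(parent, x * L + y)]:
                spins[x, y] *= -1

L = 32
spins = rng.choice([-1, 1], size=(L, L))
for _ in range(200):
    swendsen_wang_sweep(spins, 0.44)           # near the critical coupling
```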
Finally, we use a Monte Carlo Renormalization Group method to directly measure the couplings of block Hamiltonians at different blocking levels. For the usual averaging block transformation we confirm the renormalized trajectory (RT) observed by Okawa. For another, improved probabilistic block transformation we find the RT and show that it lies much closer to the standard action. We then use this block transformation to obtain the discrete β-function of the model, which we compare to the perturbative result. We do not see convergence, except when using a rescaled coupling β_E to effectively resum the series. In the latter case we see agreement for m/Λ_(MS-bar) at β = 2.14, 2.26, 2.38 and 2.50. To three loops, m/Λ_(MS-bar) = 3.047(35) at β = 2.50, which is very close to the exact value m/Λ_(MS-bar) = 2.943. Our last point, at β = 2.62, however, disagrees with this estimate.
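For concreteness, the usual averaging block transformation replaces each 2×2 block of spins by its normalized average, halving the lattice at each blocking level. A minimal sketch follows; the probabilistic block transformation and the measurement of block-Hamiltonian couplings are not reproduced here.

```python
# Averaging block-spin transformation for a lattice of O(3) unit vectors.
import numpy as np

def block_average(spins):
    """spins: (L, L, 3) unit vectors, L even -> blocked (L/2, L/2, 3) unit vectors."""
    blocked = (spins[0::2, 0::2] + spins[1::2, 0::2] +
               spins[0::2, 1::2] + spins[1::2, 1::2])
    return blocked / np.linalg.norm(blocked, axis=-1, keepdims=True)

# Example: apply two blocking levels to a random configuration.
rng = np.random.default_rng(2)
spins = rng.normal(size=(64, 64, 3))
spins /= np.linalg.norm(spins, axis=-1, keepdims=True)
level1 = block_average(spins)     # 32 x 32
level2 = block_average(level1)    # 16 x 16
```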
Abstract:
Optical Coherence Tomography (OCT) is a popular, rapidly growing imaging technique with an increasing number of biomedical applications due to its noninvasive nature. However, there are three major challenges in understanding and improving an OCT system: (1) Obtaining an OCT image is not easy. It either takes a real medical experiment or requires days of computer simulation. Without much data, it is difficult to study the physical processes underlying OCT imaging of different objects, simply because there are not many imaged objects. (2) Interpretation of an OCT image is also hard. This challenge is more profound than it appears. For instance, it would require a trained expert to tell from an OCT image of human skin whether there is a lesion or not. This is expensive in its own right, but even the expert cannot be sure about the exact size of the lesion or the width of the various skin layers. The take-away message is that analyzing an OCT image even at a high level usually requires a trained expert, and pixel-level interpretation is simply unrealistic. The reason is simple: we have OCT images but not their underlying ground-truth structure, so there is nothing to learn from. (3) The imaging depth of OCT is very limited (millimeter or sub-millimeter in human tissue). While OCT uses infrared light for illumination to stay noninvasive, the downside is that photons at such long wavelengths can only penetrate a limited depth into the tissue before being back-scattered. To image a particular region of a tissue, photons first need to reach that region. As a result, OCT signals from deeper regions of the tissue are both weak (since few photons reach there) and distorted (due to multiple scattering of the contributing photons). This fact alone makes OCT images very hard to interpret.
This thesis addresses the above challenges by successfully developing an advanced Monte Carlo simulation platform which is 10000 times faster than the state-of-the-art simulator in the literature, bringing down the simulation time from 360 hours to a single minute. This powerful simulation tool not only enables us to efficiently generate as many OCT images of objects with arbitrary structure and shape as we want on a common desktop computer, but it also provides us with the underlying ground truth of the simulated images, because we dictate it at the beginning of the simulation. This is one of the key contributions of this thesis. What allows us to build such a powerful simulation tool includes a thorough understanding of the signal formation process, a clever implementation of the importance sampling/photon splitting procedure, efficient use of a voxel-based mesh system in determining photon-mesh intersections, and parallel computation of the different A-scans that constitute a full OCT image, among other programming and mathematical tricks, which are explained in detail later in the thesis.
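The toy sketch below illustrates only the importance sampling and photon splitting ingredients mentioned above, on a homogeneous slab with isotropic scattering: the polar scattering angle is drawn from a distribution biased toward the detector, the photon weight carries the ratio of the true to the biased density, and heavy photons are split. It is not the thesis's OCT simulator, and every parameter (MU_S, ALBEDO, SLAB, Q_BACK, the splitting thresholds) is invented for the example.

```python
# Importance-sampled, weight-corrected photon random walk with splitting.
import numpy as np

rng = np.random.default_rng(3)

MU_S, ALBEDO = 10.0, 0.9        # scattering coefficient [1/mm], single-scattering albedo
SLAB = 1.0                      # slab thickness [mm]; "detector" collects photons leaving the top
Q_BACK = 0.8                    # probability of sampling the backward (detector-facing) hemisphere
W_SPLIT, N_SPLIT = 2.0, 4       # split photons heavier than W_SPLIT into N_SPLIT copies

def biased_mu():
    """Sample a direction cosine; the true physics is isotropic (pdf = 1/2 on [-1, 1])."""
    if rng.random() < Q_BACK:
        mu, pdf = -rng.random(), Q_BACK          # uniform on (-1, 0]
    else:
        mu, pdf = rng.random(), 1.0 - Q_BACK     # uniform on [0, 1)
    return mu, 0.5 / pdf                         # weight factor = true pdf / biased pdf

def diffuse_reflectance(n_photons=20000):
    detected = 0.0
    for _ in range(n_photons):
        stack = [(1e-6, 1.0, 1.0)]               # (depth z, direction cosine mu, weight)
        while stack:
            z, mu, w = stack.pop()
            while True:
                z += mu * rng.exponential(1.0 / MU_S)   # free path to the next event
                if z <= 0.0:                            # escaped through the top: "detected"
                    detected += w
                    break
                if z >= SLAB:                           # lost through the bottom
                    break
                w *= ALBEDO                             # survival weighting for absorption
                mu, factor = biased_mu()                # biased scattering + weight correction
                w *= factor
                if w > W_SPLIT:                         # photon splitting keeps weights bounded
                    stack.extend([(z, mu, w / N_SPLIT)] * (N_SPLIT - 1))
                    w /= N_SPLIT
    return detected / n_photons

print("estimated diffuse reflectance:", diffuse_reflectance())
```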
Next we aim at the inverse problem: given an OCT image, predict/reconstruct its ground-truth structure at the pixel level. By solving this problem we would be able to interpret an OCT image completely and precisely without help from a trained expert. It turns out that we can do much better. For simple structures we are able to reconstruct the ground truth of an OCT image more than 98% correctly, and for more complicated structures (e.g., a multi-layered brain structure) we reach 93%. We achieve this through extensive use of machine learning. The success of the Monte Carlo simulation already puts us in a great position by providing a great deal of data (effectively unlimited) in the form of (image, truth) pairs. Through a transformation of the high-dimensional response variable, we convert the learning task into a multi-output multi-class classification problem and a multi-output regression problem. We then build a hierarchical architecture of machine learning models (a committee of experts) and train different parts of the architecture with specifically designed data sets. In prediction, an unseen OCT image first goes through a classification model to determine its structure (e.g., the number and types of layers present in the image); the image is then handed to a regression model trained specifically for that structure to predict the thickness of the different layers and thereby reconstruct the ground truth of the image. We also demonstrate that ideas from Deep Learning can be useful to further improve the performance.
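A hedged sketch of this two-stage "committee of experts" pipeline is given below, using random forests on synthetic (image, truth) pairs as stand-ins; the feature extraction, model types, structure classes and dataset sizes in the thesis may all differ.

```python
# Gate classifier picks the structure; a per-structure regressor predicts layer thicknesses.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(4)

def make_fake_dataset(n, n_features=64, structure=0):
    """Stand-in for (image, truth) pairs from the Monte Carlo simulator."""
    X = rng.normal(size=(n, n_features)) + structure       # fake image features
    n_layers = 2 + structure                                # fake rule: structure id sets layer count
    y = rng.uniform(20, 100, size=(n, n_layers))            # fake layer thicknesses (pixels)
    return X, y

structures = [0, 1, 2]
X_all, s_all, experts = [], [], {}
for s in structures:
    X, y = make_fake_dataset(500, structure=s)
    X_all.append(X)
    s_all.append(np.full(len(X), s))
    experts[s] = RandomForestRegressor(n_estimators=50).fit(X, y)   # per-structure expert

gate = RandomForestClassifier(n_estimators=50).fit(np.vstack(X_all), np.concatenate(s_all))

def predict(image_features):
    """Classify the structure, then hand off to the matching regression expert."""
    s = int(gate.predict(image_features[None, :])[0])
    return s, experts[s].predict(image_features[None, :])[0]

X_test, _ = make_fake_dataset(1, structure=1)
print(predict(X_test[0]))
```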
It is worth pointing out that solving the inverse problem automatically improves the imaging depth, since the lower half of an OCT image (i.e., greater depth), which previously could hardly be seen, now becomes fully resolved. Interestingly, although the OCT signals constituting the lower half of the image are weak, messy, and uninterpretable to human eyes, they still carry enough information that a well-trained machine learning model, when fed with them, recovers precisely the true structure of the object being imaged. This is just another case where Artificial Intelligence (AI) outperforms humans. To the best of the author's knowledge, this thesis is not only a successful attempt to reconstruct an OCT image at the pixel level but also the first. Even attempting such a task would require a large number of fully annotated OCT images (hundreds or even thousands), which is clearly impossible without a powerful simulation tool like the one developed in this thesis.
Abstract:
G-protein coupled receptors (GPCRs) form a large family of proteins and are very important drug targets. They are membrane proteins, which makes computational prediction of their structure challenging. Homology modeling is further complicated by the low sequence similarity across the GPCR superfamily.
In this dissertation, we analyze the conserved inter-helical contacts of recently solved crystal structures, and we develop a unified sequence-structural alignment of the GPCR superfamily. We use this method to align 817 human GPCRs, 399 of which are nonolfactory. This alignment can be used to generate high-quality homology models for the 817 GPCRs.
To refine these GPCR homology models, we developed the Trihelix sampling method. We use a multi-scale approach that simplifies the problem by treating the transmembrane helices as rigid bodies. In contrast to Monte Carlo structure prediction methods, the Trihelix method performs complete local sampling using discretized coordinates for the transmembrane helices. We validate the method on existing structures and apply it to predict the structure of the lactate receptor, HCAR1. For this receptor, we also build the extracellular loops, taking into account constraints from three disulfide bonds. Docking of lactate and 3,5-dihydroxybenzoic acid shows likely involvement of three Arg residues on different transmembrane helices in binding a single ligand molecule.
Protein structure prediction relies on accurate force fields. We next present an effort to improve the quality of charge assignment for large atomic models. In particular, we introduce the formalism of the polarizable charge equilibration scheme (PQEQ) and describe its implementation in the molecular simulation package LAMMPS. PQEQ allows fast on-the-fly charge assignment even for reactive force fields.
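As background for the charge-equilibration idea, the sketch below solves the basic (non-polarizable) QEq problem: charges minimize an electronegativity-plus-hardness energy with Coulomb coupling under a total-charge constraint, which reduces to one linear solve with a Lagrange multiplier. The PQEQ shell charges, shielded interactions and LAMMPS implementation are not reproduced, and all parameter values are placeholders.

```python
# Basic QEq-style charge assignment:
#   minimize  E(q) = sum_i (chi_i q_i + 0.5 eta_i q_i^2) + sum_{i<j} q_i q_j / r_ij
#   subject to sum_i q_i = total_charge.
import numpy as np

def qeq_charges(positions, chi, eta, total_charge=0.0):
    """positions: (N, 3); chi, eta: per-atom electronegativity and hardness."""
    n = len(chi)
    A = np.zeros((n + 1, n + 1))
    r = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    off = ~np.eye(n, dtype=bool)
    A[:n, :n][off] = 1.0 / r[off]          # bare Coulomb coupling (unshielded)
    A[np.arange(n), np.arange(n)] = eta    # atomic hardness on the diagonal
    A[:n, n] = A[n, :n] = 1.0              # charge-conservation constraint row/column
    b = np.concatenate([-np.asarray(chi, dtype=float), [total_charge]])
    sol = np.linalg.solve(A, b)
    return sol[:n]                         # last entry is the Lagrange multiplier

# Example: three atoms with made-up parameters (arbitrary units).
pos = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.2, 0.0]])
print(qeq_charges(pos, chi=[4.5, 2.0, 3.0], eta=[7.0, 6.0, 6.5]))
```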
Abstract:
The topological phases of matter have been a major part of condensed matter physics research since the discovery of the quantum Hall effect in the 1980s. Recently, much of this research has focused on the study of systems of free fermions, such as the integer quantum Hall effect, quantum spin Hall effect, and topological insulator. Though these free fermion systems can play host to a variety of interesting phenomena, the physics of interacting topological phases is even richer. Unfortunately, there is a shortage of theoretical tools that can be used to approach interacting problems. In this thesis I will discuss progress in using two different numerical techniques to study topological phases.
Recently much research in topological phases has focused on phases made up of bosons. Unlike fermions, free bosons form a condensate and so interactions are vital if the bosons are to realize a topological phase. Since these phases are difficult to study, much of our understanding comes from exactly solvable models, such as Kitaev's toric code, as well as Levin-Wen and Walker-Wang models. We may want to study systems for which such exactly solvable models are not available. In this thesis I present a series of models which are not solvable exactly, but which can be studied in sign-free Monte Carlo simulations. The models work by binding charges to point topological defects. They can be used to realize bosonic interacting versions of the quantum Hall effect in 2D and topological insulator in 3D. Effective field theories of "integer" (non-fractionalized) versions of these phases were available in the literature, but our models also allow for the construction of fractional phases. We can measure a number of properties of the bulk and surface of these phases.
Few interacting topological phases have been realized experimentally, but there is one very important exception: the fractional quantum Hall effect (FQHE). Though the fractional quantum Hall effect was discovered over 30 years ago, it can still produce novel phenomena. Of much recent interest is the existence of non-Abelian anyons in FQHE systems. Though it is possible to construct wave functions that realize such particles, whether these wave functions are the ground state is a difficult quantitative question that must be answered numerically. In this thesis I describe progress in using a density-matrix renormalization group algorithm to study a bilayer system thought to host non-Abelian anyons. We find phase diagrams in terms of experimentally relevant parameters, and we also find evidence for a non-Abelian phase known as the "interlayer Pfaffian".
Abstract:
A study is made of the accuracy of electronic digital computer calculations of ground displacement and response spectra from strong-motion earthquake accelerograms. This involves an investigation of methods of the preparatory reduction of accelerograms into a form useful for the digital computation and of the accuracy of subsequent digital calculations. Various checks are made for both the ground displacement and response spectra results, and it is concluded that the main errors are those involved in digitizing the original record. Differences resulting from various investigators digitizing the same experimental record may become as large as 100% of the maximum computed ground displacements. The spread of the results of ground displacement calculations is greater than that of the response spectra calculations. Standardized methods of adjustment and calculation are recommended, to minimize such errors.
Studies are made of the spread of response spectral values about their mean. The distribution is investigated experimentally by Monte Carlo techniques using an electric analog system with white noise excitation, and histograms are presented indicating the dependence of the distribution on the damping and period of the structure. Approximate distributions are obtained analytically by confirming and extending existing results with accurate digital computer calculations. A comparison of the experimental and analytical approaches indicates good agreement for low damping values where the approximations are valid. A family of distribution curves to be used in conjunction with existing average spectra is presented. The combination of analog and digital computations used with Monte Carlo techniques is a promising approach to the statistical problems of earthquake engineering.
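A digital stand-in for that analog Monte Carlo experiment is easy to sketch: integrate a damped single-degree-of-freedom oscillator driven by white-noise "ground acceleration", record the peak displacement of each realization, and histogram the peaks. The period, damping, duration, time step and number of realizations below are illustrative, not the values studied in the thesis.

```python
# Monte Carlo estimate of the spread of response-spectrum ordinates under white noise.
import numpy as np

rng = np.random.default_rng(5)

def peak_response(period, damping, duration=30.0, dt=0.01, sigma=1.0):
    """Max |x| of  x'' + 2*z*w*x' + w^2*x = -a(t)  for one white-noise record a(t)."""
    w = 2.0 * np.pi / period
    n = int(duration / dt)
    accel = sigma * rng.normal(size=n) / np.sqrt(dt)   # discrete white noise
    x, v, xmax = 0.0, 0.0, 0.0
    for a in accel:                                    # simple semi-implicit Euler step
        v += dt * (-2.0 * damping * w * v - w * w * x - a)
        x += dt * v
        xmax = max(xmax, abs(x))
    return xmax

peaks = np.array([peak_response(period=1.0, damping=0.02) for _ in range(500)])
hist, edges = np.histogram(peaks, bins=20)             # empirical distribution of spectral values
print("mean peak:", peaks.mean(), "coefficient of variation:", peaks.std() / peaks.mean())
```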
Methods of analysis of very small earthquake ground motion records obtained simultaneously at different sites are discussed. The advantages of Fourier spectrum analysis for certain types of studies and methods of calculation of Fourier spectra are presented. The digitizing and analysis of several earthquake records are described, and checks are made of the dependence of the results on the digitizing procedure, earthquake duration and integration step length. Possible dangers of a direct ratio comparison of Fourier spectrum curves are pointed out, and the necessity for some type of smoothing procedure before comparison is established. A standard method of analysis for the study of comparative ground motion at different sites is recommended.
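The comparison workflow described above can be sketched as follows: compute the Fourier amplitude spectra of two digitized records, smooth them, and only then form the site-to-site spectral ratio. The records, sampling interval and smoothing window here are synthetic placeholders.

```python
# Fourier amplitude spectra, smoothing, and a spectral ratio between two sites.
import numpy as np

def fourier_amplitude(accel, dt):
    """One-sided Fourier amplitude spectrum of an accelerogram."""
    freqs = np.fft.rfftfreq(len(accel), dt)
    return freqs, np.abs(np.fft.rfft(accel)) * dt

def smooth(spectrum, half_width=5):
    """Running-mean smoothing; ratios of raw spectra are unstable because
    near-zero ordinates in the denominator blow up."""
    kernel = np.ones(2 * half_width + 1)
    kernel /= kernel.sum()
    return np.convolve(spectrum, kernel, mode="same")

dt = 0.02                                   # 50 samples per second, illustrative
rng = np.random.default_rng(6)
site_a = rng.normal(size=2048)              # stand-ins for two digitized records
site_b = rng.normal(size=2048)

f, spec_a = fourier_amplitude(site_a, dt)
_, spec_b = fourier_amplitude(site_b, dt)
ratio = smooth(spec_a) / smooth(spec_b)     # compare only after smoothing
```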