5 results for Dynamical Monte Carlo

in CaltechTHESIS


Relevance:

100.00%

Abstract:

We investigate the 2d O(3) model with the standard action by Monte Carlo simulation at couplings β up to 2.10. We measure the energy density, mass gap, and susceptibility of the model, gathering high statistics on lattices of size L ≤ 1024 using the Floating Point Systems T-series vector hypercube and Thinking Machines Corp.'s Connection Machine 2. Asymptotic scaling does not appear to set in for this action, even at β = 2.10, where the correlation length is 420. We observe a 20% difference between our estimate m/Λ_M̄S̄ = 3.52(6) at this β and the recent exact analytical result. We use the overrelaxation algorithm interleaved with Metropolis updates and show that the decorrelation time scales with the correlation length and with the number of overrelaxation steps per sweep. We determine its effective dynamical critical exponent to be z' = 1.079(10); thus critical slowing down is reduced significantly for this local algorithm, which is vectorizable and parallelizable.
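
As a concrete illustration of the overrelaxation update (a standard formulation of the algorithm, not code from the thesis; the function name and array layout are ours), each spin is reflected about its local mean field. The reflection preserves the energy exactly, which is why Metropolis sweeps must be interleaved to make the algorithm ergodic:

```python
import numpy as np

def overrelax_sweep(spins):
    """One checkerboard overrelaxation sweep for the 2d O(3) model.

    spins: array of shape (L, L, 3) holding unit vectors.
    Each spin is reflected about the local mean field,
    s -> 2 (s . v) v / |v|^2 - s, with v the sum of the four
    nearest-neighbor spins. The reflection conserves the energy,
    so Metropolis (or heat-bath) sweeps must be interleaved.
    """
    L = spins.shape[0]
    for parity in (0, 1):  # checkerboard: neighbors stay fixed
        mask = (np.add.outer(np.arange(L), np.arange(L)) % 2) == parity
        v = (np.roll(spins, 1, 0) + np.roll(spins, -1, 0) +
             np.roll(spins, 1, 1) + np.roll(spins, -1, 1))
        v2 = np.sum(v * v, axis=-1, keepdims=True)
        v2[v2 == 0.0] = 1.0  # guard against a vanishing mean field
        sv = np.sum(spins * v, axis=-1, keepdims=True)
        reflected = 2.0 * sv / v2 * v - spins
        spins[mask] = reflected[mask]
    return spins
```

The checkerboard ordering is what makes the update vectorizable and parallelizable: all sites of one parity can be reflected at once because their neighbors all have the opposite parity.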

We also use cluster Monte Carlo algorithms, non-local update schemes that can greatly increase the efficiency of computer simulations of spin models. The major computational task in these algorithms is connected component labeling, which identifies clusters of connected sites on the lattice. We have devised new SIMD component labeling algorithms and implemented them on the Connection Machine, and we investigate their performance when applied to the cluster update of the two-dimensional Ising spin model.
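
The labeling step can be sketched as follows (a sequential union-find version for illustration only; the thesis's SIMD labeling algorithms for the Connection Machine are organized differently). A Swendsen-Wang-style update is assumed here: bonds between aligned spins are activated with probability p = 1 - exp(-2β), the components of the bond graph are labeled, and each cluster is flipped with probability 1/2:

```python
import numpy as np

def sw_cluster_update(spins, beta, rng):
    """One Swendsen-Wang cluster update of the 2d Ising model.

    spins: (L, L) array of +1/-1. Clusters of sites joined by
    active bonds are identified with union-find, the sequential
    analogue of the connected component labeling step.
    """
    L = spins.shape[0]
    parent = np.arange(L * L)

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    p = 1.0 - np.exp(-2.0 * beta)
    for x in range(L):
        for y in range(L):
            for dx, dy in ((1, 0), (0, 1)):  # bonds to +x and +y
                xn, yn = (x + dx) % L, (y + dy) % L
                if spins[x, y] == spins[xn, yn] and rng.random() < p:
                    ri, rj = find(x * L + y), find(xn * L + yn)
                    if ri != rj:
                        parent[ri] = rj

    flip = {}
    for x in range(L):
        for y in range(L):
            r = find(x * L + y)
            if r not in flip:
                flip[r] = rng.random() < 0.5  # flip cluster with prob 1/2
            if flip[r]:
                spins[x, y] *= -1
    return spins
```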

Finally, we use a Monte Carlo Renormalization Group method to directly measure the couplings of block Hamiltonians at different blocking levels. For the usual averaging block transformation we confirm the renormalized trajectory (RT) observed by Okawa. For an improved probabilistic block transformation we locate the RT and show that it lies much closer to the standard action. We then use this block transformation to obtain the discrete β-function of the model, which we compare to the perturbative result. We do not see convergence, except when using a rescaled coupling β_E to effectively resum the series. In the latter case we see agreement for m/Λ_M̄S̄ at β = 2.14, 2.26, 2.38, and 2.50. To three loops, m/Λ_M̄S̄ = 3.047(35) at β = 2.50, which is very close to the exact value m/Λ_M̄S̄ = 2.943; our last point, at β = 2.62, however, disagrees with this estimate.
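
For reference, the usual averaging transformation acts on an O(3) configuration roughly as follows (a minimal sketch under our own conventions; the improved probabilistic transformation is not reproduced here):

```python
import numpy as np

def block_average(spins, b=2):
    """One level of the averaging block transformation for O(3) spins.

    spins: (L, L, 3) unit vectors, L divisible by b. Each b x b block
    is replaced by the normalized sum of its spins; the couplings of
    the Hamiltonian governing the blocked configuration are the block
    Hamiltonian couplings measured in the MCRG analysis.
    """
    L = spins.shape[0]
    blocked = spins.reshape(L // b, b, L // b, b, 3).sum(axis=(1, 3))
    norms = np.linalg.norm(blocked, axis=-1, keepdims=True)
    norms[norms == 0.0] = 1.0  # guard: a block sum can vanish
    return blocked / norms
```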

Relevance:

100.00%

Abstract:

Optical Coherence Tomography (OCT) is a popular, rapidly growing imaging technique with an increasing number of biomedical applications due to its noninvasive nature. However, there are three major challenges in understanding and improving an OCT system. (1) Obtaining an OCT image is not easy: it requires either a real medical experiment or days of computer simulation. Without much data, it is difficult to study the physical processes underlying OCT imaging of different objects, simply because there are not many imaged objects. (2) Interpreting an OCT image is also hard, and this challenge is more profound than it appears. For instance, it takes a trained expert to tell from an OCT image of human skin whether there is a lesion or not; this is expensive in its own right, and even the expert cannot be sure about the exact size of the lesion or the width of the various skin layers. The take-away message is that analyzing an OCT image even at a high level usually requires a trained expert, while pixel-level interpretation is simply unrealistic: we have OCT images but not their underlying ground-truth structure, so there is nothing to learn from. (3) The imaging depth of OCT is very limited (a millimeter or less in human tissue). OCT uses infrared light for illumination in order to stay noninvasive, but the downside is that photons at such long wavelengths penetrate only a limited depth into the tissue before being back-scattered. To image a particular region of a tissue, photons first need to reach that region; as a result, OCT signals from deeper regions are both weak (few photons reach there) and distorted (by multiple scattering of the contributing photons). This fact alone makes OCT images very hard to interpret.

This thesis addresses the above challenges by developing an advanced Monte Carlo simulation platform that is 10,000 times faster than the state-of-the-art simulator in the literature, bringing the simulation time down from 360 hours to a single minute. This powerful simulation tool not only enables us to efficiently generate as many OCT images of objects with arbitrary structure and shape as we want on a common desktop computer, but also provides the underlying ground truth of the simulated images, because we specify that structure at the start of the simulation. This is one of the key contributions of this thesis. What allows us to build such a powerful tool is a thorough understanding of the signal formation process, a careful implementation of the importance sampling/photon splitting procedure, efficient use of a voxel-based mesh system for determining photon-mesh intersections, and parallel computation of the different A-scans that constitute a full OCT image, among other programming and mathematical tricks, all of which are explained in detail later in the thesis.
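
To indicate the flavor of the importance sampling/photon splitting procedure (a generic variance-reduction sketch with assumed data structures, not the thesis's implementation):

```python
import numpy as np

def importance_step(photon, importance_of, rng):
    """Photon splitting / Russian roulette at an importance change.

    photon: dict with 'position', 'weight', 'importance' (illustrative).
    Moving into a more important region (e.g., toward the detector) the
    photon is split into n copies of weight w/n; moving into a less
    important region it plays Russian roulette. The expected weight is
    unchanged either way, so the estimator stays unbiased while samples
    concentrate on paths that actually contribute to the OCT signal.
    """
    i_old = photon["importance"]
    i_new = importance_of(photon["position"])
    photon["importance"] = i_new
    if i_new > i_old:  # split
        n = max(1, int(round(i_new / i_old)))
        copies = []
        for _ in range(n):
            c = dict(photon)
            c["weight"] = photon["weight"] / n
            copies.append(c)
        return copies
    if i_new < i_old:  # Russian roulette
        if rng.random() < i_new / i_old:
            photon["weight"] *= i_old / i_new
            return [photon]
        return []  # photon terminated
    return [photon]
```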

Next we turn to the inverse problem: given an OCT image, predict/reconstruct its ground-truth structure at the pixel level. Solving this problem would allow an OCT image to be interpreted completely and precisely without help from a trained expert. It turns out we can do much better than that: for simple structures we reconstruct the ground truth of an OCT image more than 98% correctly, and for more complicated structures (e.g., a multi-layered brain structure) about 93%. We achieve this through extensive use of machine learning. The success of the Monte Carlo simulation already puts us in a strong position by providing a great deal of data (effectively unlimited) in the form of (image, truth) pairs. Through a transformation of the high-dimensional response variable, we convert the learning task into a multi-output multi-class classification problem and a multi-output regression problem. We then build a hierarchical architecture of machine learning models (a committee of experts) and train the different parts of the architecture with specifically designed data sets. In prediction, an unseen OCT image first goes through a classification model that determines its structure (e.g., the number and types of layers present in the image); the image is then handed to a regression model, trained specifically for that structure, which predicts the widths of the different layers and thereby reconstructs the ground truth of the image, as sketched below. We also demonstrate that ideas from Deep Learning can further improve the performance.
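
A minimal sketch of such a committee-of-experts pipeline, assuming scikit-learn-style models, flattened pixel features, and a fixed-length width vector per structure type (all of which are our assumptions, not the thesis's code):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

class CommitteeOfExperts:
    """Classify an image's structure, then route it to a regressor
    trained only on images of that structure."""

    def __init__(self):
        self.classifier = RandomForestClassifier()
        self.regressors = {}

    def fit(self, images, structures, widths):
        # images: (N, H, W); structures: (N,) labels; widths: (N, k)
        X = images.reshape(len(images), -1)
        self.classifier.fit(X, structures)
        for s in np.unique(structures):
            idx = structures == s
            reg = RandomForestRegressor()
            reg.fit(X[idx], widths[idx])  # expert for structure s
            self.regressors[s] = reg

    def predict(self, image):
        x = image.reshape(1, -1)
        s = self.classifier.predict(x)[0]     # which layers are present
        w = self.regressors[s].predict(x)[0]  # their predicted widths
        return s, w
```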

It is worth pointing out that solving the inverse problem automatically improves the imaging depth: previously the lower half of an OCT image (i.e., the greater depths) could hardly be seen, but it now becomes fully resolved. Interestingly, although the OCT signals that make up the lower half of the image are weak, messy, and uninterpretable to the human eye, they still carry enough information that a well-trained machine learning model can recover precisely the true structure of the object being imaged. This is another case in which Artificial Intelligence (AI) outperforms humans. To the best of the author's knowledge, this thesis is not only a success but also the first attempt to reconstruct OCT images at the pixel level. Even attempting such a task requires a large number of fully annotated OCT images (hundreds or even thousands), which is clearly impossible to obtain without a powerful simulation tool like the one developed in this thesis.

Relevance:

100.00%

Abstract:

The problem of "exit against a flow" for dynamical systems subject to small Gaussian white noise excitation is studied. Here the word "flow" refers to the behavior in phase space of the unperturbed system's state variables. "Exit against a flow" occurs if a perturbation causes the phase point to leave a phase space region within which it would normally be confined. In particular, there are two components of the problem of exit against a flow:

i) the mean exit time

ii) the phase-space distribution of exit locations.

When the noise perturbing the dynamical systems is small, the solution of each component of the problem of exit against a flow is, in general, the solution of a singularly perturbed, degenerate elliptic-parabolic boundary value problem.
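
For concreteness, in one standard formulation (notation ours, not the thesis's): for a diffusion dX_t = b(X_t) dt + √(2ε) dW_t in a region Ω, the mean exit time u(x) from starting point x satisfies

```latex
% Mean exit time for dX_t = b(X_t)\,dt + \sqrt{2\epsilon}\,dW_t
% (a standard formulation; scaling conventions are assumed)
\epsilon \, \Delta u(x) + b(x) \cdot \nabla u(x) = -1 \quad (x \in \Omega),
\qquad u(x) = 0 \quad (x \in \partial\Omega).
```

The problem is singularly perturbed because the small parameter ε multiplies the highest-order derivatives, and it can degenerate when, as in second-order mechanical systems, the noise acts on only some of the phase-space variables.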

Singular perturbation techniques are used to express the asymptotic solution in terms of an unknown parameter. The unknown parameter is determined using the solution of the adjoint boundary value problem.

The problem of exit against a flow for several dynamical systems of physical interest is considered, and the mean exit times and distributions of exit positions are calculated. The systems are then simulated numerically, using Monte Carlo techniques, in order to determine the validity of the asymptotic solutions.
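
Such a numerical check can be as simple as direct path simulation (a brute-force Euler-Maruyama sketch with assumed names; practical only when ε is not too small, since exits against the flow become exponentially rare):

```python
import numpy as np

def mean_exit_time(b, x0, in_region, eps, dt=1e-3, n_paths=200, seed=0):
    """Estimate the mean exit time of dX = b(X) dt + sqrt(2*eps) dW
    from a region by simulating paths until each first leaves it.

    b: drift function; in_region: predicate that is True inside.
    Returns the sample mean and its standard error.
    """
    rng = np.random.default_rng(seed)
    x0 = np.asarray(x0, dtype=float)
    times = np.empty(n_paths)
    for k in range(n_paths):
        x, t = x0.copy(), 0.0
        while in_region(x):
            x += b(x) * dt + np.sqrt(2.0 * eps * dt) * rng.standard_normal(x.shape)
            t += dt
        times[k] = t
    return times.mean(), times.std() / np.sqrt(n_paths)

# Exit from (-1, 1) against the stable drift b(x) = -x:
tau, err = mean_exit_time(lambda x: -x, [0.0], lambda x: abs(x[0]) < 1.0, eps=0.5)
```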

Relevance:

90.00%

Abstract:

Disorder and interactions both play crucial roles in quantum transport. Decades ago, Mott showed that electron-electron interactions can lead to insulating behavior in materials that conventional band theory predicts to be conducting. Soon thereafter, Anderson demonstrated that disorder can localize a quantum particle through the wave interference phenomenon of Anderson localization. Although interactions and disorder each separately induce insulating behavior, the interplay of these two ingredients is subtle and often leads to surprising behavior at the periphery of our current understanding. Modern experiments probe these phenomena in a variety of contexts (e.g., disordered superconductors, cold atoms, and photonic waveguides); thus, theoretical and numerical advances are urgently needed. In this thesis, we report progress on understanding two contexts in which the interplay of disorder and interactions is especially important.

The first is the so-called “dirty” or random boson problem. In the past decade, a strong-disorder renormalization group (SDRG) treatment by Altman, Kafri, Polkovnikov, and Refael has raised the possibility of a new unstable fixed point governing the superfluid-insulator transition in the one-dimensional dirty boson problem. This new critical behavior may take over from the weak-disorder criticality of Giamarchi and Schulz when disorder is sufficiently strong. We analytically determine the scaling of the superfluid susceptibility at the strong-disorder fixed point and connect our analysis to recent Monte Carlo simulations by Hrahsheh and Vojta. We then shift our attention to two dimensions and use a numerical implementation of the SDRG to locate the fixed point governing the superfluid-insulator transition there. We identify several universal properties of this transition, which are fully independent of the microscopic features of the disorder.
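
To give the flavor of a numerical SDRG implementation (a schematic open-chain version of AKPR-style decimation rules with our own bookkeeping; the full scheme tracks considerably more structure):

```python
import numpy as np

def sdrg_chain(U, J, stop_n=2):
    """Strong-disorder RG sketch for a chain of Josephson junctions,
    H = sum_i U_i n_i^2 - sum_i J_i cos(phi_i - phi_{i+1}).

    At each step the largest coupling is decimated:
      * largest J_i: sites i, i+1 merge into a superfluid cluster
        with 1/U_new = 1/U_i + 1/U_{i+1};
      * largest U_i: site i freezes, leaving the second-order
        coupling J_new = J_{i-1} * J_i / U_i across it.
    """
    U, J = list(U), list(J)  # len(J) == len(U) - 1, open chain
    while len(U) > stop_n:
        iU, iJ = int(np.argmax(U)), int(np.argmax(J))
        if J[iJ] >= U[iU]:
            # bond decimation: merge the two sites across the bond
            U[iJ] = 1.0 / (1.0 / U[iJ] + 1.0 / U[iJ + 1])
            del U[iJ + 1], J[iJ]
        else:
            # site decimation: couple the neighbors of the frozen site
            if 0 < iU < len(U) - 1:
                J[iU - 1] *= J[iU] / U[iU]
                del J[iU]
            else:
                del J[0 if iU == 0 else -1]  # edge site: drop its bond
            del U[iU]
    return U, J
```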

The second focus of this thesis is the interplay of localization and interactions in systems with high energy density (i.e., far from the usual low energy limit of condensed matter physics). Recent theoretical and numerical work indicates that localization can survive in this regime, provided that interactions are sufficiently weak. Stronger interactions can destroy localization, leading to a so-called many-body localization transition. This dynamical phase transition is relevant to questions of thermalization in isolated quantum systems: it separates a many-body localized phase, in which localization prevents transport and thermalization, from a conducting (“ergodic”) phase in which the usual assumptions of quantum statistical mechanics hold. Here, we present evidence that many-body localization also occurs in quasiperiodic systems that lack true disorder.
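
A quasiperiodic potential of the kind meant here can be written in a few lines (the Aubry-André form is one standard choice; taking it as the relevant model is our assumption):

```python
import numpy as np

def aubry_andre_potential(n_sites, W, phi=0.0):
    """On-site energies h_i = W cos(2*pi*alpha*i + phi), with alpha the
    inverse golden ratio: a deterministic, quasiperiodic sequence that
    nonetheless localizes single particles for W > 2 (in units of the
    hopping), much as true randomness does."""
    alpha = (np.sqrt(5.0) - 1.0) / 2.0
    i = np.arange(n_sites)
    return W * np.cos(2.0 * np.pi * alpha * i + phi)
```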

Relevance:

90.00%

Abstract:

In a probabilistic assessment of the performance of structures subjected to uncertain environmental loads such as earthquakes, an important problem is to determine the probability that the structural response exceeds some specified limits within a given duration of interest. This problem is known as the first excursion problem, and it has been a challenging problem in the theory of stochastic dynamics and reliability analysis. In spite of the enormous amount of attention the problem has received, there is no procedure available for its general solution, especially for engineering problems of interest where the complexity of the system is large and the failure probability is small.
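
In symbols, for a scalar response x(t), a safe threshold b, and a duration T (notation ours, for illustration):

```latex
% First excursion probability: the chance that the response leaves
% the safe region within the duration of interest
P_F(T) = P\!\left( \max_{0 \le t \le T} \, \lvert x(t) \rvert > b \right)
```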

The application of simulation methods to solving the first excursion problem is investigated in this dissertation, with the objective of assessing the probabilistic performance of structures subjected to uncertain earthquake excitations modeled by stochastic processes. From a simulation perspective, the major difficulty in the first excursion problem comes from the large number of uncertain parameters often encountered in the stochastic description of the excitation. Existing simulation tools are examined, with special regard to their applicability to problems with a large number of uncertain parameters. Two efficient simulation methods are developed to solve the first excursion problem. The first method is developed specifically for linear dynamical systems and is found to be extremely efficient compared to existing techniques. The second method is more robust with respect to the type of problem and is applicable to general dynamical systems; it is efficient for estimating small failure probabilities because its computational effort grows at a much slower rate with decreasing failure probability than that of standard Monte Carlo simulation. The simulation methods are applied to assess the probabilistic performance of structures subjected to uncertain earthquake excitation. Failure analysis is also carried out using the samples generated during the simulation, which provide insight into the probable scenarios that occur when a structure fails.
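
The baseline these methods are measured against is standard Monte Carlo, whose cost grows rapidly as the failure probability shrinks (a generic sketch with assumed callables):

```python
import numpy as np

def failure_probability_mc(sample_excitation, fails, n_samples, seed=0):
    """Standard Monte Carlo estimate of a failure probability.

    sample_excitation(rng) draws one realization of the uncertain
    parameters; fails(theta) runs the dynamic analysis and reports
    whether a first excursion occurred. The coefficient of variation
    of the estimate is sqrt((1 - p) / (n_samples * p)), so the number
    of analyses needed grows like 1/p_F as p_F decreases.
    """
    rng = np.random.default_rng(seed)
    hits = sum(fails(sample_excitation(rng)) for _ in range(n_samples))
    p_hat = hits / n_samples
    cov = np.sqrt((1.0 - p_hat) / (n_samples * p_hat)) if hits else np.inf
    return p_hat, cov
```

For p_F ≈ 10⁻⁴ and a 30% coefficient of variation, this already requires on the order of 10⁵ dynamic analyses, which is precisely the scaling the two methods developed in this dissertation are designed to beat.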